
Let’s talk about how we can help you and your organization

Fill out the form to start the conversation. A 3M representative will reach out to you soon.


  • 3M takes your privacy seriously. 3M and its authorized third parties will use the information you provided in accordance with our Privacy Policy to send you communications which may include promotions, product information and service offers. Please be aware that this information may be stored on a server located in the U.S. If you do not consent to this use of your personal information, please do not use this system.


Medical professionals reviewing documentation on a tablet
Rethinking clinical documentation integrity.

3M CDI Innovation Webinar Series



For more than 20 years, clinical documentation integrity (CDI) experts have played a key role in the health care industry. As the industry evolves at a record pace, their work has never been more important or more challenging. 3M is here to support that crucial work through the new 3M CDI Innovation Webinar Series.


On-demand webinars

  • 2023

    • Improve your CDI documentation by leveraging comprehensive AI technology

      • May 2023
      • Learn about practical applications and best practices for how 3M's comprehensive clinical documentation integrity (CDI) technology can easily identify and flag potential documentation clarification opportunities in patient health records. This can help improve quality of care; support accurate coding, medical necessity, and billing processes; and decrease the risk of regulatory violations or reimbursement denials. In addition, learn from our experts how the application of compliant technology can help reduce the time required to resolve documentation issues, promoting greater efficiency and productivity within your organization.
      • Recording coming soon
    • (DESCRIPTION) Logo, 3M Health Information Systems. Text, March 2023 3M C D I Innovation Webinar: N L U, clinical content, and documentation integrity. (SPEECH) Good afternoon and welcome to our March CDI Innovation Webinar. Before we get started and before we get into our discussion around NLU and the clinical content and documentation integrity, I'm just going to go over a couple of housekeeping items before we go over to our speakers. So we are using the ON24 webinar platform. It is a great experience with a lot of different engagement tools. This is a web-based platform, so if you are having any issues, close out of VPN and make sure you're using Google Chrome. That'll help with the actual platform as well as bandwidth; just making sure you're closing out of multiple tabs, that'll help. There is no dial-in number, so if you are having any audio issues, do a quick refresh and that typically solves any issues you might be having. Again, we have multiple engagement sections for a better experience for you. We have our Q&A section, so definitely ask questions throughout and we'll get to the questions at the end. So put your questions into that feature there, and like I said, we'll get to those questions at the end. We do have a certificate of attendance available in the resources section. We also have the presentation handout if you'd like to download that and follow along, as well as a couple other resources, and that is in the resources section of your dashboard. We also have closed captioning, so if you do need that feature, that is in the media player and that is in real time. And like I said, if you do need the presentation as well as the certificate of attendance, that is in the resources section. And then at the end, we always appreciate feedback, so please let us know how we did within that survey. All right. So our speakers today are Dannie Greenlee and Josh Arman.
If you'd like to learn more about their experience and a little bit more about them, you can look at that in the speaker section and the meet the speakers section of the dashboard as well. And so I'm going to go ahead and turn things over to Josh to tell us about the agenda and get things started. Thanks, Lisa. Good afternoon, everyone. So for today's agenda, I'm going to jump into some primary goals here in a minute that Dannie and I are hoping to cover. We're going to talk about AI technology and documentation. Yes, we're going to focus a little bit on the CDI lens, but we're also going to talk about other ways in which AI can be used in your provider documentation. We're going to talk a little bit about 3M's content governance approach to our technology and content, and review a use case related to heart failure. The primary goals that we're looking to cover are achieving compliant documentation through artificial intelligence. We want to decrease the documentation burden on providers as it relates to CDI. We want to increase efficiency, accuracy, and consistency across different workflows. We want to have the ability to expand the CDI team's encounter coverage; we know that in the industry that is a large focus. As well as intelligent prioritization driven by artificial intelligence, looking at clinical factors, patient- and business-focused factors, as well as event-driven factors. So in some of the slides coming up, as we look into the use case, you'll see how those factors can help drive some of the prioritization. (DESCRIPTION) Slide title, Game-changing cloud-based AI technology. (SPEECH) Jumping right into AI technology. It is applying systemic reasoning and contextual understanding to data aggregated from your electronic medical record. So our AI is using the data from the EHR; we partner with multiple different EHR vendors in the industry to capture that data.
The data that we are using is primarily your provider documentation and your discrete laboratory data. So we use that data across an HL7 interface. We're not looking at just what the provider has documented into their documentation, but we're looking at pulling that across an HL7 interface, as well as the radiology results; those two come across an interface. So those are our three sources of data today as it stands, but we are looking to expand our data sources throughout this year to help capture additional data sources beyond just a provider's documentation. Continuously and automatically reviewing, analyzing, monitoring, and improving all the documentation, all the time, driving consistency and efficiency in real time. So on the provider end, the provider does not need to really click anything in their workflow to see if there's something that's been identified by the AI; we're able to push that to the provider in real time while they're working on their documentation. In our AI, we also use standard ontologies as well as clinical concepts and value sets from across the medical record, using those data sources that I mentioned to really help identify those clinical conditions. And Dannie will jump into that and into the use case when we look at heart failure. This, on the right-hand side of your screen, is basically looking at heart failure, but then, how do we identify that from an AI standpoint? So we're looking at temporalities. We're looking at the child concepts or the parent concepts. We're looking at evidence of heart failure. So we're not just looking at whether a provider said heart failure; we're looking at all the different ways to capture that, and Dannie is going to jump through those as it relates to the clinical concepts. And then, we're able to identify the type of heart failure. We know we need the type, we know we need the acuity, so we're able to identify those concepts as well in the provider documentation.
So at this point, I'm going to turn it over to Dannie to begin to discuss some of the AI capabilities that 3M has. (DESCRIPTION) Slide title, Artificial Intelligence. Two branches, deep learning and predictive analytics, feed into the machine learning branch. Three branches, translation, classification and clustering, and information extraction, feed into the natural language processing (N L P) branch. Two branches, speech to text and text to speech, feed into the speech branch. Two branches, image recognition and machine vision, feed into the vision branch. The branches, along with the expert systems and planning, scheduling, and optimization branch, ultimately feed into artificial intelligence (A I). (SPEECH) Thank you very much, Josh. So, this slide is very interesting to me, because oftentimes when we think AI, we think of Arnold Schwarzenegger fighting robots, Will Smith fighting robots, or maybe Keanu Reeves fighting robots; maybe that's just me. But in truth, AI is made up of many parts to create a whole artificial intelligence. And we can see NLP and machine learning here are driving artificial intelligence, as well as expert systems and speech to text and other things that make up the whole underpinning of our NLU and AI. And so I want to talk in depth, as we go on, about the expert systems that support our clinical solutions products as we think about AI. (DESCRIPTION) Slide title, N L U engines. Slide text, Acuity Engine. Grammar-based engine that assigns acuity to findings. Acute onset, acute to subacute, acute on chronic, chronic, sudden onset. (SPEECH) So as Josh mentioned, we look at the whole concept in our natural language understanding, and he highlighted heart failure specifically.
So it goes beyond NLP, or natural language processing. Our NLP is phrase-based; it's sort of small in scope and just at the sentence level, while our NLU is concept-based and surrounded by information models, engines, and more to really capture the context around that single piece of information. So if we quickly go back and take a look at Josh's slide there about heart failure, he mentioned it perfectly: by capturing the child concepts or the parent concepts, the temporal-- temporality, or any of these other surrounding pieces of information, we can really drive value by providing more context around that single piece of clinical information. So highlighted on this slide, we have over 20 engines in our NLU, and I just want to talk briefly about acuity. As we know from that heart failure example, you really need to capture whether that's an acute or chronic heart failure. And based on our NLU engine, we can run our documentation through the NLU, and part of it is to piece out whether that is acute or chronic. And there are many other engines displayed here that make up that: the lab engine that reasons over labs, the clinical finding engines. You can see them all here, but this really continues to drive that context surrounding clinical information. (DESCRIPTION) Slide title, M*Modal Information Models (MIMs). Surrounding MIMs 15+ are nodes labeled findings, substance administration, action course, procedures, labs, and allergies. (SPEECH) So this slide is our MIMs, or our M*Modal information models, which we also call immortal information models, which is fun to be able to use that acronym. Our NLU has over 15 MIMs, and they're pretty spectacular. The way I had to envision a MIM was as an empty shell that has slots that need to be filled for the NLU to reason over and generate some sort of output. So you can see in the square on the left, we've highlighted the medication administration model.
And it has the slots that need to be filled for this to happen. You can see a substance needs to be mentioned, a start time, a stop time, and that it was given. So for example, in our heart failure use case, if a provider were to document, we've given IV [INAUDIBLE] at this time, we can then begin to fulfill those slots. So we've got the substance [INAUDIBLE], we've got the time, and that it was given IV, in order to really reason over this information. We have to have all of these pieces of information, which are important and are driven by the value sets that we maintain. So value sets are groups of things that provide the clinical indicators that we need to fulfill each slot. For heart failure, for example, if we need to understand all of the medications that may be used to treat heart failure, we maintain a value set of those medications that would be commonly found within those. And these MIMs really, again, drive that AI, drive that context around these individual pieces of information that we get when we reason over the clinical documentation. (DESCRIPTION) Slide title, MIMs in N L U. (SPEECH) So this is a really high-level visualization of how the MIMs are working with the NLU. And you can see that we have documents that come through that are either narrative documentation or structured data. They're transitioned into the NLU, and then we semantically or syntactically process those documents to make them readable by the NLU. We reason over those with our MIMs, with our engines; all of that happens, and those produce serialized objects. Those serialized objects are then fed into the applications, so that's what the application understands. And the application does a translation in order to display information that an end user would understand. So I like this slide just to think about how data flows from one place, the EHR, all the way through to the applications to really provide value for our end users.
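The "empty shell with slots" idea the speaker describes can be sketched in a few lines of Python. This is an illustrative sketch only, not 3M's actual data model: the class name, slot names, and the tiny HEART_FAILURE_MEDS value set are all assumptions invented for the example.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical value set: medications commonly used to treat heart failure.
# A real curated value set would be far larger and versioned.
HEART_FAILURE_MEDS = {"furosemide", "lisinopril", "metoprolol", "spironolactone"}

@dataclass
class MedicationAdministrationMIM:
    """An 'empty shell' whose slots the NLU fills from the documentation."""
    substance: Optional[str] = None
    route: Optional[str] = None        # e.g. "IV", "oral"
    start_time: Optional[str] = None
    stop_time: Optional[str] = None    # often absent, so treated as optional

    def missing_slots(self) -> list:
        # Slots that must be filled before the NLU can reason over the model.
        required = {"substance": self.substance, "route": self.route,
                    "start_time": self.start_time}
        return [name for name, value in required.items() if value is None]

    def supports_heart_failure(self) -> bool:
        # Usable as heart-failure evidence only once the required slots are
        # filled AND the substance appears in the value set.
        return (not self.missing_slots()
                and self.substance.lower() in HEART_FAILURE_MEDS)

# "We've given IV furosemide at 14:00" fills three slots:
mim = MedicationAdministrationMIM(substance="furosemide", route="IV",
                                  start_time="14:00")
print(mim.missing_slots())           # []
print(mim.supports_heart_failure())  # True
```

The value set does the clinical work here: the model itself only tracks which slots are filled, while the curated list decides whether a filled model counts as an indicator for a given condition.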
And on the next slide, we're going to jump back to Josh, and he's going to talk about our content governance and our content creation processes around this information. (DESCRIPTION) Slide title, Content governance. At the left is a circle with three slices: patient outcomes, end user experience, and industry leading content. (SPEECH) OK. So if we look at this slide that I'm presenting now, and we go to the left-hand side and look at the smaller wheel, the buckets that we have here are end user experience, industry leading content, and then patient outcomes. So when we're talking about the end user experience, that could be either the provider, receiving the information, as I mentioned earlier, in real time while they're doing their documentation, for that increased specificity that's required for that clinical concept; or, on the CDI side, using sort of that same framework but using the tools to help identify those clinical concepts as well as prioritize the encounters in which those clinical concepts are contained, so they can be captured. And then looking at the content: I'm going to jump into the content here in a second. And then really, our hope is that we're driving better patient outcomes, that we're getting the specificity real time, and we're helping with the long-term continuum of care. (DESCRIPTION) At the center, content methodology is surrounded by six triangles labeled, evidence-based practice, quality improvement, government regulatory, provider feedback, impact analysis, and industry regulations. (SPEECH) So if we jump into the center of the slide there with the symbols, they sort of correlate on the right-hand side with the detail. Let's start up at the top with the evidence-based practice. So we're really using up-to-date information to help develop out our clinical concepts and nudges that will be presented, either again to the provider or to the CDI, in their workflow.
We're looking at quality improvement, so we're using data-driven analytics to really help drive the quality of that documentation. We're looking at the government regulatory side. So looking at CMS, looking at Coding Clinic, we're using that information to make sure that our clinical concepts and nudges are remaining compliant. Provider feedback: this is very important. Our team spends a lot of time on site with our customers to really get that provider feedback, because we know today that alert fatigue is real. Administrative burden is real. We want to be able to have the technology aid in real time to save any backend workflows for the providers. Impact analysis: we want you to be able to gain value from using our technology, and every customer defines what their value is maybe a little bit differently. But we are able to provide the impact analysis from the use of the technology. And then, as I mentioned, industry regulations: so the [INAUDIBLE] guidelines, we really do follow as we develop out our clinical guidelines-- our clinical concepts that are used in our applications. (DESCRIPTION) Slide title, Customization request process. Slide text, Adoption specialist assigned for life of project. Bullet points, Customer meetings with adoption, weekly, bi-weekly, daily as needed. On site visits as needed. Works with customer to determine nudges for go-lives and specialties. Submits customer requests (enhancements, issues, bugs). Tests with customer in product. (SPEECH) As we move on to sort of our customization request process: we have our out of the box-- out of the box content that can be used, but really our differentiator is that we do take customization from our customers. And there really is no limit there, so you can bring new concept requests to us and we will develop them out. Or maybe you have found that there is a clinical concept that you're interested in, but it doesn't meet your organization's needs. That is completely fine.
We're able to develop that out so that it does meet your organization's needs. So for the life of the contract, we do assign an adoption specialist. And this is a subject matter expert that is primarily focused on the use of the technology. So the adoption specialist does arrange regular meetings with the customers. And this is a resource that is actually assigned at the beginning of the implementation and is really with the customer, again, through the life of the contract. So it's not like someone is coming in sort of midway in your use of the technology; it's really started at the beginning of the implementation, in there with you. This is a person that we don't traditionally sort of change hands with or change out. We really want to focus on developing that vendor and customer relationship. So they're really basically part of your team in helping ensure that you are using the technology to its fullest potential. As I mentioned, on-site visits as needed, and this is really at our customer's sort of request or expectation. It's not like we're going to be coming out every week or every month, but I think we're able to develop a cadence that would meet your needs, whether that be quarterly, or maybe twice a year; whatever the need is, we want to be sure that we're there to support you. The adoption specialist also works with the customer to determine what the nudges are at go-live and the specialties that we want to focus on. We're going to talk a little bit more about our best practice in future slides, and I can cover that a little bit further. The adoption specialist is also sort of the customer voice. So the adoption specialist is the resource that submits that request to our internal content team to begin to triage, and then to develop it out. And then, they're also available to test alongside of you as you're going through your process. Another resource that is part of the team is the content coach.
And the content coach is there to support basically the adoption specialist and the customer. The content coach is a subject matter expert as it relates to really the NLU and the clinical-- the clinical concepts that Dannie has mentioned. And Dannie is going to discuss really her team and some of the content coaches' backgrounds and really the makeup of that team. They're also there to triage your request and make sure that we are developing it out as you expect. We want to get it right the first time, but maybe we develop it one way and it wasn't the customer's expectation, or we determine that, hey, we're capturing many-- too many false positives, or maybe we need to tweak it. Again, as I mentioned, our NLU is very nimble in the sense of the customization. So the content coach is able to really help answer those questions or help guide the customer as to how we think that clinical concept really should be created or used, as well as discussing the content needs with adoption to basically best support the customer. The content coach is not necessarily 100% someone that is coming on site to do the necessary work; that is where the adoption specialist comes in. But the content coach is there in the background to help support adoption as it relates to how the content is being used. So at this point, I'm going to turn it back over to Dannie for her to discuss the clinical content team and its makeup. (DESCRIPTION) Slide title, Clinical content team. Above a bridge is text, medical providers document in clinical terms. Coding and compliance need specificity in diagnosis terms. Below the bridge reads, A C D I program creates a bridge between this gap. Who builds the bridge? The clinical content team. 20+ variety of credentials (M D, Ph D, Pharm D, M S/M S N, M L S, M S W, R N, B S, C P C, C C S, C C D S, R H I A). Years experience, from 4 to 47. (SPEECH) Thank you, Josh.
I love this slide because I get to talk about what is nearest and dearest to my heart, and that is my team and our expertise. As you can see represented at the top of this slide, we see providers on the left-hand side in purple, who document in clinical terms. And then, we know that coding and compliance needs specificity in diagnosis terms. So a CDI program within facilities bridges that gap, and our application helps also to bridge that gap. And who builds this bridge on the NLU side is the clinical content team. And we have over 20 varieties of credentials, from doctors to PharmDs to nurses, CCDSs, lab techs, informaticists. And we use our clinical background to create and curate this content. We also have a wide range of years of experience, which I think is really fantastic as well, from less than one year of experience to greater than 15 years. So we have innovation and new ideas mixed with the wisdom of the people who've been working in these fields for a long time to really drive and maintain the value of that content governance that Josh was representing earlier. (DESCRIPTION) Slide title, Content workflow diagram. Arrows travel clockwise around a circle, which has slices labeled new nudge request; research guidelines, create nudge; review encounters; customize nudge; update engines, grammars; Q A content gov; test N L U and repeat. (SPEECH) I'm going to move on now and talk about the process for the content workflow, what we do on our team. And this begins here with this green pie piece where we get a new nudge request. This may be a new nudge request, or it may be a new enhancement. And this often comes from, as Josh mentioned, the adoption team, who work closely with the customers to come up with new use cases or enhance existing use cases to really drive value in their individual areas. So we get that request, we research the guidelines, and we create the nudge based on those guidelines that Josh mentioned, [INAUDIBLE] up to date.
Whatever clinical guidelines and CDI guidelines, to verify the value of the request. We review encounters, customize the nudges, and update the engines and grammars in the NLU. We of course go through a content QA, a quality assurance, within, and this ties back to that content governance, just to ensure that each request is reviewed by at least two CDI content team experts, SMEs. And we also test this locally. And we repeat this process as needed. And highlighted on the right is that each of these requests has to come through with a release, and we've highlighted our release cycle over on the right-hand side. (DESCRIPTION) Slide title, Quantity Recommendations. Slide text, To avoid burnout for providers and C D I specialists, 3M has established the following best practices. Heading, Evidence sheets. Text, At Go-Live, 10 to 11 potential conditions from the approved conditions list. 30 Days Post Go-Live, Add 3 to 5 additional conditions. 60 Days Post Go-Live, Add 3 to 5 additional conditions. Heading, Nudges. At Go-Live: 3 to 5 specialty-specific groups, 10 to 11 nudges per group. 3 Months Post Go-Live, Up to 10 specialty-specific groups, 10 to 12 nudges per group. For each nudge, the same condition should be enacted for Evidence Sheets. (SPEECH) So I'm going to hand it back to Josh, who's going to talk about some of the best practices and recommendations. So as I mentioned earlier, this is a question that we get all the time: basically, how many do we start with, or how many conditions do we start with? As I previously discussed, we know that alert fatigue is real and administrative burden is there. So we want to be very pragmatic in how we roll out the technology. And every customer and every organization is a little bit different in how they want to approach it. And maybe we approach it one way because we think it's going to work the best way.
And then we find out that, hey, maybe the rollout that we did wasn't ideal and we need to take a step back and re-approach it. That's completely fine. The way that our technology is rolled out is on an end user basis, so we don't need to do a Big Bang. For many of our customers, we don't do a Big Bang; we actually do phases. So when we look at evidence sheets, this is a workflow that is used in 360 Encompass for the CDI team. At go-live, we really want to focus on 10 to 11 potential conditions that are sort of approved from our list that we recommend. We've identified that the NLU functions well with these; they drive value. So this is what we suggest you start with. 30 days post go-live, we can look to add in three to five additional conditions, and then 60 days post go-live, another three to five. Again, every customer is going to be a little bit different. There's not necessarily a cookie cutter response here. Some teams react to evidence sheets a lot better and find them easier to use than other teams, so we're able to go at a faster pace. As well, I can't stress it enough: it is very important that we need some interaction and management from our customers as we roll out this technology. This is not a sort of install-and-drop piece of content. We really need active engagement, and where we've seen the best success is with those customers that are actively engaged with us. So the evidence sheets are truly focusing on the CDI workflow to help capture those clinical concepts from encounters, so that they don't necessarily have to go review and sort of jot down on a piece of paper. The AI is doing that heavy lift for them and pushing that information to them in their workflow. As it relates to nudges on the provider side, this is where, while we find that the CDI team maybe has a little bit more tolerance for volume, we know providers really are a little bit more, I guess, boisterous as it relates to how they perceive technology.
So from a nudge standpoint, we really want to start with about 10 to 11 nudges per group; that's maybe even a little bit on the high end. We tend to like to start smaller and add in. But then, we want to focus on three to five specialty-specific groups. So what that means is maybe we're not going to do a Big Bang approach. But maybe for your organization, you've determined that a Big Bang, or focusing on the same conditions for every specialty, I guess I would say, is the way to go. We have the ability to sort of cut those out. So maybe you want to focus with hospitalists, you want to focus with pulmonologists, and then nephrologists. You can define all those groups and then define what content or what nudges are to be enabled for those groups. So not every group has to have the same nudge enabled. Of course, they can. Maybe you've determined from your organization there's an initiative and you need-- you want to enable malnutrition for every provider no matter what their specialty is. You can do that. But we've also found that if you are focusing on specialties, most likely those providers that are answering those nudges are going to be able to add that specificity in, and it's much more pertinent to the patient population in which they're treating. We have heard from providers, I don't know why I'm receiving this nudge type; this isn't a nudge that I would traditionally respond to or even be queried for. So that's why we're able to break this out based off of specialties. I always urge customers, don't find yourself going down rabbit holes as you define specialties, because you can find yourself in the weeds. So we really want to focus on your high-hitting specialties or your costly service lines or your highly queried service lines as we develop out these nudges. Then, three to five-- I'm sorry, three months post go-live, maybe we increase the number of specialties as well as the number of nudges per group.
Again, this is rolled out on an end user basis, so you don't have to roll it out all at once. You can focus on certain specialties. Once you get that under your belt, add in additional providers. It is completely at your pace; we can go as fast or as slow as you need to. So at this point, I'm going to turn it back over to Dannie to sort of complete out our areas of coverage from the content standpoint and then jump into that use case a little bit further related to heart failure, to give you a real understanding of how our AI functions. (DESCRIPTION) Slide title, Areas of coverage. On the X axis of a bar graph are labels, such as neuro, eye, E N T, and respiratory. Each label has two bars, conditions and nudges. The Y axis ranges from 0 to 80. Every Nudges bar is significantly higher than the Conditions bars, which don't exceed 20. Some Nudges bars reach over 70. (SPEECH) Great. Thanks, Josh. We are going to dive deep here in just a moment. So this slide is a representation of our library and our areas of coverage. Organized along the bottom, you can see MDCs, and you can see where we have conditions and nudges available. Light teal is conditions and dark teal is the number of nudges for that condition. So you can see where we have lots of high areas of impact with rules created, and you can see areas where we can expand our content. And as Josh mentioned, while we do have this focus in CDI, we appreciate any use case that this NLU and AI may be used for, really to drive whatever the facility's needs are. We're always looking to expand our coverage as we create and curate this content. (DESCRIPTION) Slide title, Heart failure overview. Heading, M D C 05 Circulatory System. Text, Condition: Heart failure. Nudge count: 9. Heading, C D I guidelines. Bullet points, Code to specific type and acuity. Specify stage of H F if possible. A C C/A H A classification used as reference.
3M coding and reimbursement references, Coding Clinics, A C D I S/A H I M A references. (SPEECH) So this is the heart failure condition overview. So under MDC 05, the circulatory system, we have a condition of heart failure. And within this condition, we have nine nudges that are created to capture different bits of information depending on the different use cases. And this may be a CDI workflow, or it may be a nudge workflow or quality or other sort of areas that we've had rules created for. We've highlighted where our CDI guidelines and our clinical guidelines come from specific to this condition. This is not an all-inclusive list; it's just an example of some of the information that we look for as we go to curate these rules at this level. So UpToDate and the Merck Manual are some of the clinical references that we use for building our clinical guidelines. And then, as mentioned earlier, we use the 3M coding and reimbursement references, Coding Clinics, [INAUDIBLE] as our CDI guidelines to know really how to capture content in those specific areas. We've selected a single nudge to review, to go really down into the details of what it takes to build and maintain one of these rules. (DESCRIPTION) Heading, Nudge details. Subheading, Condition: bullet points, Documentation of H F, (+/-) evidence of diastolic H F, (+/-) evidence of acuity. Subheading, Requirement: Bullet points, Documentation of systolic/diastolic H F. Documentation of acute/chronic. (+/-) grade. Heading, C D I messages. Subheading, Rule Satisfied Message: Bullet point, Acuity and type of heart failure were properly documented. Subheading, Unsatisfied Message: Bullet point, There is documentation and evidence of heart failure but type and acuity were not documented.
(SPEECH) So this is a heart failure nudge and you can see from the title alone there, we're looking for the documentation of heart failure, plus or minus evidence of heart failure, without documentation of the type and acuity of heart failure. So at first glance, what this is looking for is somewhere in the medical record, we've got the word [INAUDIBLE]-- in the encounter, we've got the word heart failure. And we also have some evidence of heart failure but we don't know the type or acuity of that heart failure. So in the nudge details, what it takes to trigger this nudge is that documentation of heart failure plus some pieces of evidence of heart failure. And then, what it takes to resolve this nudge is the very specific documentation of the type of heart failure and whether it was acute or chronic. We also have some information called out on the middle and right hand side of this slide with the messaging. And these messages can be displayed depending upon who is-- where in the workflow this is. As Josh mentioned, we have provider messaging, and so if we're nudging the provider in real time to capture the specificity, we have that message there, and those messages can be customized per facility and per whatever the hospital is trying to capture, because of those relationships the CDI specialists have with their providers, and so they know the type of messages to put in front of them. The CDI messages are a little more general. And that's because, as Josh mentioned, we can put more information in front of them and they can use that to decide whether to link a query or prioritize their workflow based on these evidence sheets that are put in front of them. So now, we're going to move on to a really deep dive into how these rules are built. (DESCRIPTION) Three concentric circles. The innermost circle reads, condition. The next middle circle reads, requirement. The outer circle reads, provider message.
(SPEECH) So you can see this diagram, we have CDI notifications, CDI opportunities, and provider nudges, and what it takes to create each one of those separately. So if we just have a condition represented there with the green box, that would be a CDI notification. So this is when there's been some clinical piece of information that we want to get in front of the CDI workflow. And that is because it may drive value for how they're working on that chart. So for example, during the pandemic, we created some notifications regarding COVID. So this would be like your-- this patient here appears to have all these signs and symptoms of COVID and we just thought that you would like to know, perhaps you need to investigate a little bit further. To add a layer of depth to that, we also have requirements. So a condition alone triggers the rule, and a condition plus a requirement gives us a CDI opportunity. So these are rules that fire and then would be either fully documented or still an opportunity for documentation improvements based upon what information is found in the chart. And lastly, to provide that next layer is the physician message and turning it on within the nudge workflow to get that in real time; that is sort of what differentiates a physician nudge from a CDI opportunity. (DESCRIPTION) Below the circles are three labels: value sets, concepts, parameters. (SPEECH) So here's the heart failure rule. And this is-- I've got it. Just one moment. So what I talked about earlier was that we have to have that mention of heart failure. And you can see the presence of heart failure as the last bullet point. We absolutely have to have that within the encounter for this rule to fire. We also need some specific clinical indicators that heart failure may be present on this patient. So represented here, we have the less than or equal to 40% ejection fraction. And we also have some evidence of BNP or proBNP greater than 500.
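The presenters describe the three tiers only in prose. As a rough illustration of the idea, not 3M's actual implementation, the tiering could be sketched like this; the `Rule` class and all field names are hypothetical:

```python
# Hypothetical sketch of the three rule tiers described in the talk:
# a condition alone yields a CDI notification, condition plus requirement
# yields a CDI opportunity, and adding a provider message enables a
# real-time provider nudge. Names are illustrative only.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Rule:
    condition: str                          # clinical trigger, e.g. "heart failure"
    requirement: Optional[str] = None       # documentation needed to resolve
    provider_message: Optional[str] = None  # real-time physician nudge text

    def tier(self) -> str:
        if self.requirement is None:
            return "CDI notification"
        if self.provider_message is None:
            return "CDI opportunity"
        return "provider nudge"

covid_alert = Rule(condition="signs and symptoms of COVID")
hf_rule = Rule(condition="heart failure",
               requirement="type and acuity documented",
               provider_message="Please document type and acuity of heart failure.")

print(covid_alert.tier())  # CDI notification
print(hf_rule.tier())      # provider nudge
```

The point of the sketch is only that each layer is additive: the same condition can surface as a notification, an opportunity, or a nudge depending on what else is configured.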
And we also have some evidence of maybe that some heart failure medications were given. And it's interesting when you look at this condition because you can see that we have, as Josh called it, our out of the box parameter of less than or equal to 40% in our ejection fraction. But we also have customer customizations where some people wanted less than or equal to 50% or even less than or equal to 55%. On this one, we're also looking for some temporalities, which is part of the NLU AI. We're trying to see if it's past or present history of heart failure. And we have a customer constraint there to exclude past sometimes on these rules. So that's part of just getting this rule to fire in front of the CDI specialist or the provider. We have to have these pieces of information. To fulfill it and make it a fully documented opportunity, we need documentation of the type of heart failure and we also need the documentation of the acuity of heart failure. We also include some temporality constraints and we can do customization constraints like we've done with this document type underneath the requirement. (DESCRIPTION) Heading, Value Set. Subheading, Heart failure. Bullet point, Snomed C T, heart failure disorder. Under the bullet point is a subset of bullet points, acute heart failure, chronic heart failure, right ventricular failure, left ventricular failure. (SPEECH) Each of these pieces of this rule-- this nudge are maintained with the curation of value sets. And SNOMED is one of the values-- SNOMED CT is one of the ontologies that Josh mentioned that we use to capture the clinical content for these rules. So you can see heart failure and all of its descendants that would help resolve this rule. (DESCRIPTION) Heading, Concept. Subheading, Systolic heart failure. Bullet points, Snomed C T, systolic dysfunction; Snomed C T, heart failure, systolic failure.
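The trigger logic described above, a mandatory heart failure mention plus at least one clinical indicator, with a per-customer ejection fraction cutoff, can be sketched as follows. This is a minimal illustration under stated assumptions; the function name, the dictionary fields, and the flat-dictionary encounter representation are all hypothetical, not 3M's API:

```python
# Hedged sketch of the heart failure trigger: the rule fires when the
# encounter mentions heart failure AND shows at least one indicator
# (EF <= cutoff, BNP/proBNP > 500, or a heart failure medication).
# The EF cutoff is the customizable parameter; the talk notes some
# customers use 50% or 55% instead of the out-of-the-box 40%.
def hf_condition_fires(encounter: dict, ef_cutoff: float = 40.0) -> bool:
    if not encounter.get("mentions_heart_failure"):
        return False  # the heart failure mention is mandatory
    ef = encounter.get("ejection_fraction")
    bnp = encounter.get("bnp")
    evidence = [
        ef is not None and ef <= ef_cutoff,
        bnp is not None and bnp > 500,
        bool(encounter.get("hf_medications")),
    ]
    return any(evidence)

enc = {"mentions_heart_failure": True, "ejection_fraction": 45, "bnp": 300}
print(hf_condition_fires(enc))                # False with the default 40% cutoff
print(hf_condition_fires(enc, ef_cutoff=50))  # True for a customer using <= 50%
```

The same encounter can fire for one customer and not another purely because of the configured cutoff, which is the customization point the speakers emphasize.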
(SPEECH) We also talked-- We also talked about concepts and this is something where-- like we have a value set of these different types of heart failure that we were showing before. And now we have this specific type of heart failure, this systolic heart failure, which may be further modeled to capture more information. So systolic heart failure is made up of concepts of systolic dysfunction, as well as heart failure plus many synonyms, SHF, systolic failure and even a German version of systolic heart failure there. So this type of work where we add synonyms or model synonyms is part of the daily work that the content team does to constantly curate and maintain these value sets to drive more value. (DESCRIPTION) Heading, Parameters. Bullet points, N L U temporality, past, present future; N L U experiencer, family patient, other; N L U certainty, certain, hedged, hypothetical, maybe, remote, ruled out, negative, undefined; N L U document type, clinical, lab, radiology, medication administration; evidence, ejection fraction less than or equal to 40%; customer less than or equal to 50%; documentation, acuity of heart failure; customer, O R right ventricular failure. (SPEECH) So if we're going to talk about individual concepts and the-- We're down to parameters. So we're going to talk about parameters and all of the ways that we tailor the NLU to capture that context we were talking about earlier. We can look for temporality and experiencer and certainty and document type, and we reason over the encounter with all of this information to really provide that context. And again, here's another example of the ejection fraction less than or equal to 40 and customers who don't-- they want to be a little bit tighter. They want that ejection fraction to be less than or equal to 50. And this is the same thing with the acuity of heart failure. We have a customer who preferred that documentation of right heart failure-- right ventricular failure was enough to capture that.
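The parameter constraints on the slide, temporality, experiencer, and certainty attached to each extracted mention, amount to a filter over mentions. A minimal sketch, assuming a mention is a simple dictionary and using illustrative defaults (not 3M's actual values or API):

```python
# Hypothetical sketch of NLU parameter constraints: a rule only counts
# mentions whose temporality, experiencer, and certainty satisfy its
# configured constraints. The talk's example is a customer constraint
# excluding "past" history of heart failure.
def mention_counts(mention: dict,
                   allowed_temporality=("present",),
                   allowed_experiencer=("patient",),
                   allowed_certainty=("certain",)) -> bool:
    return (mention["temporality"] in allowed_temporality
            and mention["experiencer"] in allowed_experiencer
            and mention["certainty"] in allowed_certainty)

history = {"temporality": "past", "experiencer": "patient", "certainty": "certain"}
family = {"temporality": "present", "experiencer": "family", "certainty": "certain"}
current = {"temporality": "present", "experiencer": "patient", "certainty": "certain"}

print(mention_counts(history))  # False: constraint excludes past history
print(mention_counts(family))   # False: family history, not the patient
print(mention_counts(current))  # True
```

This is why "mother had heart failure" or "history of heart failure" can appear in the note without firing the rule: the mention is extracted but filtered out by experiencer or temporality.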
So we have this ability to really curate and customize these use cases at many different levels. (DESCRIPTION) Slide title, Encounter reviews. Slide text, Review encounters. Bullet points, Per nudge, per customer and across customers. Search for grammar, language and N L U patterns/issues. Disambiguation - acronyms most common issues. A table with columns category, cause, and comments. Row, category, circulatory. Cause, incorrect evidence. Comments, template issue: O2 triggering instead of flow rate. Row, category, respiratory. Cause, other. Comments, False Positive. Disambiguation: 'pe' 'immature granulocytes' - (pulmonary edema). Row, category, circulatory. Cause, context. Comments, grammar: D V T unlikely, suspected versus ruled out. Row, category, neuro. Cause, context. Comments, Temporality: History of the following complications - stroke, not picking up historical. Row, category, Respiratory. Cause, Language. Comments, Disambiguation: Possible P E, Pulmonary embolism, versus Pulmonary edema. Row, category, Kidney and Urinary. Cause, Language. Comments, Disambiguation: C K D client using as C C/K G/Day at ped hospital. (SPEECH) So I mentioned as a part of our process, we do encounter reviews. And this is where we really look at how the NLU is functioning within a specific set of encounters and a specific set of organizations-- organization. So we know that each facility and each provider and everything may document a little bit differently. And that's really where doing these encounter reviews provides value. So while we do these encounter reviews, we look at the NLU and how it fires and we find lots of different things. And we do this per nudge, per customer and across customers. We're searching for grammar, language, and NLU patterns and issues. And we're also looking at disambiguation. Acronyms are a really common thing that we find and that we add to the NLU to really provide more context for individual customers.
One of my favorite examples of something that we found during a review was down there at the bottom with kidney and urinary. So there was a facility, it was a Children's Hospital, that had CKD documented all over their medical record. But what all of us know is CKD means chronic kidney disease, except for at this facility, it was most often used as cc per kilogram per day because it was a pediatric hospital and that's how they did their fluid restrictions for each of their pediatric patients. And so we had to create some disambiguation tickets and enhance the NLU to not fire any chronic kidney disease rules, nudges that may have also been turned on by this facility. We do things like that. And we have to look at that and then enhance the NLU and drive that value, not just for this single customer but all customers. So these encounter reviews are an invaluable part of our content curation and maintenance. (DESCRIPTION) The slide with the three concentric circles of condition, requirement, and provider message. Heading, Primary Care Exam Summary 01/01/2023. Slide text, patient has pulmonary edema, heart failure with ejection fraction less than 40% and B N P 912. Currently taking 40mg furosemide. Heading, Primary Care Exam Summary 01/02/2023. Slide text, patient has been diagnosed with acute on chronic systolic heart failure. (SPEECH) So this probably looks familiar. But I wanted to talk about some clinical examples and how we do some testing to check that this rule is firing as we expect it to be fired. So here on the bottom, we have an example of maybe a note from a primary care exam. And you can see that it says, patient has pulmonary edema, heart failure with ejection fraction less than 40%, and a BNP of 912, currently taking 40 milligrams of furosemide IV, twice a day. (DESCRIPTION) Lines from parts of the condition point to key phrases in a primary care exam summary.
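The CKD story above is a classic acronym-disambiguation problem. As a hedged, greatly simplified illustration (not 3M's model, and the cue words and function name are invented for this sketch), a site- and context-aware expansion might look like:

```python
# Hypothetical sketch of the CKD disambiguation example: at a pediatric
# hospital, "CKD" in a fluid-restriction context meant cc/kg/day, not
# chronic kidney disease, so CKD mentions there should not fire chronic
# kidney disease rules. The cue-word heuristic is illustrative only.
def expand_ckd(context: str, pediatric_site: bool) -> str:
    fluid_cues = ("fluid", "restriction", "intake", "ml/")
    if pediatric_site and any(cue in context.lower() for cue in fluid_cues):
        return "cc/kg/day"
    return "chronic kidney disease"

print(expand_ckd("Fluid restriction: 80 CKD", pediatric_site=True))  # cc/kg/day
print(expand_ckd("Stage 3 CKD on admission", pediatric_site=True))   # chronic kidney disease
```

A real system would reason over far richer context, but the sketch shows why the fix had to be per-facility: the same token legitimately means different things at different sites.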
(SPEECH) So if this piece of text were run through the NLU, we can see which individual pieces here would capture for these different pieces of information. So we have heart failure, which I said was required for this rule to fire. We have our ejection fraction less than 40%, we have our BNP greater than 500, and we also have the 40 milligrams furosemide IV BID. Interestingly, it's not represented on this slide but in order to get that evidence of heart failure medications, we've had to fulfill that substance administration MIM. And we had to fill all of those slots in order for that to fire for this certain piece of evidence. And you can see, we have our dose, we have our substance, we have how it was given, and we have when it was given to fulfill all of that in the medical information model in order to trigger that piece of evidence for this nudge. (DESCRIPTION) The lines disappear. (SPEECH) So if all of those are met, we can then move on to what it would take to resolve the rule or make the nudge go away. So if the document-- doctor provided, patient has been diagnosed with acute on chronic systolic heart failure, we have now nudged them and we've said, hey, you said heart failure and you said that they had EF less than 40 and also they're on some medications, and their BNP is high. We've given that message, that provider nudge message that's like, can you please document the type and acuity of heart failure that you've said this patient has. (DESCRIPTION) Lines from parts of the requirement point to a phrase in a primary care exam summary. (SPEECH) And when we look at this, we can see which portions of this rule are now resolved. So we have the acute on chronic systolic heart failure. Acute on chronic, of course, satisfying that acute piece that we need to capture and the systolic satisfying that type of heart failure. So we do this testing locally where we test our rules at the local level.
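The slot-filling the speaker describes for the substance administration medical information model (MIM), where the medication evidence only counts once dose, substance, route, and timing are all extracted, can be sketched minimally. The slot names and function below are illustrative assumptions, not 3M's actual model:

```python
# Hypothetical sketch of MIM slot-filling: "40 mg furosemide IV BID"
# fills dose, substance, route, and frequency; only a fully filled
# model counts as medication evidence for the nudge.
REQUIRED_SLOTS = ("dose", "substance", "route", "frequency")

def mim_filled(slots: dict) -> bool:
    return all(slots.get(s) for s in REQUIRED_SLOTS)

extracted = {"dose": "40 mg", "substance": "furosemide",
             "route": "IV", "frequency": "BID"}
partial = {"dose": "40 mg", "substance": "furosemide"}

print(mim_filled(extracted))  # True: medication evidence triggers
print(mim_filled(partial))    # False: route and frequency missing
```

The all-or-nothing check mirrors the talk's point: a bare drug name in the note is not enough; the full administration context has to be recoverable before it becomes evidence.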
And we also look at these within our encounter reviews to verify that things are triggering and resolving as we expect them to be. So I think-- Let's see. (DESCRIPTION) Text, Q and A. (SPEECH) I think that was all that I had to cover today. And we can open it up to questions. Great. Thank you both so much. There was so much information that you got through. So thank you so much. We do have a couple of questions. The first one I have is, do the MIMs review the MAR or scanned information of Drug Administration, or does it need to be written in a provider's note? So from the NLU standpoint today. And I'm talking about the functionality as it stands today, we would capture the medications as it relates to the provider documentation. This year and hopefully here in the next quarter or two, we will be adding in medication administration records to be able to capture that information outside of just provider documentation. So today, if you're using the technology or if you're implementing the technology, we would be capturing it from the provider documentation. But here in the near future, we will be looking to capture it outside of the provider documentation. The other piece I want to hit on not directly to this question but as we look at the data sources again, it is your provider documentation. It is discrete laboratory data and I sort of stress the discrete part because we are looking again across an HL7 interface. So we're not looking just at the provider documentation. Actually, we are looking at making a lot of changes where as it relates to lab results where we're looking not at provider documentation, we want to solely use the lab as the source of truth. So the discrete laboratory data and the radiology results. We know our customers have a lot more requests out there such as vital signs and flow sheets. And we want to get there but today, we are truly just looking at the provider documentation. OK, great. 
Next question we have is, does this integrate with Epic EMR, and let's just expand that to other EMRs? Yeah, so I don't want to focus on the word integration because we are not in the Epic UI or we're not in any EMR's UI today. When we think of how we nudge providers, we sit on top of the EMR. So there is a control bar that's sitting on top of the EMR, not in the actual EMR's UI. We are partnering with our EMR vendors to see if we're able to enhance that but today, we sit on top. But we do take the information from Epic. This is both for Hyperdrive and Hyperspace. So if you are moving to Hyperdrive here this year or even in the future, we are doing testing with some of our early adopters today as it relates to Hyperdrive and we'll be able to support that. I don't want to limit ourselves as it relates to a specific EMR. If you are interested to know if we support your EMR, I would suggest reaching out to our 3M sales team and we can help with that. But obviously, with those sort of larger EMRs in the industry today, Epic, Cerner, Meditech Expanse, we do all work alongside of those today. Great. All right, we have another question about the actual nudges. Does the HF scenario generate both a provider nudge and a CDI notification? Yes, it does. If it's turned on in both scenarios like Josh was talking about in the best practices, it will surface either a fully documented opportunity or just an opportunity to capture heart failure for the CDI specialist. And it will generate a provider nudge, if we don't have that type and acuity already documented. All right, great. Another question we have is, can you create content outside of CDI? Yes. Let me take that. You want me to or you, sorry, Josh. Yep, you can. Go ahead. OK. Yeah, absolutely, we're always looking for areas to expand our content internally as well as with partners in the community.
We recently were given an example of a doc, like an article, that had some really interesting physician template that they do to do cancer screening. And that was brought to us as ways that we could maybe reason over the NLU to provide some more information about these cancer patients and capture something sort of up front. And we're always looking for use cases like that to expand our content beyond just the CDI workflow. And Josh may have some good examples where we've done that in the past. Yeah, so I think an example where we have helped to identify patients in the past beyond just CDI is if you're a current 3M customer, you have access to, or I should say if you're currently using our Engage One platform, you do have access to our content that is available. And you may see some of the conditions that are there, such as identifying patients that may require hospice care a little bit sooner in their plan of care. So we do have different conditions available beyond just CDI. I always tell customers, don't always assume that we're not able to do something, please give us your use case because most likely we're able to develop it out. It just may be dependent on what application we may find would be the best suited for it. All right. I'm going to go back to the nudges. Can you set it up so the nudged-- to nudge the provider based on specific sepsis criteria? For an example, sepsis-- the different types of sepsis and those types of criteria. We can create customized rules based on any criteria that's very clearly defined. So if we have a very clear use case of the sepsis-3 criteria and what would be expected to trigger and resolve in the pieces of evidence, we can work within our parameters to build a rule to do that. All right, great. Next question is, does this encourage doctors to move these diagnoses to their discharge summary?
So I think the important part here is obviously capturing the specificity in real time while they're doing their documentation. So we have heard this request from customers in the past to say, hey, is there a way to nudge a provider when a condition gets dropped. So let's say that you have a patient that has a very high length of stay, they've been there for a couple of months. Something was documented early on in their length of stay and it didn't get carried forward to their documentation. Unfortunately, we're not able to capture that today. We're not able to say, hey, you've documented this specificity somewhere within the encounter and it didn't make it to this specific piece of documentation. Our hope is that if the provider is adding in the specificity that's required to their problem list and they're really using the problem list in a way to help drive the patient care, which I know we all know the problem list is a disaster and it doesn't always reflect the current patient's conditions. But whether they update their problem list or they're pulling some type of list forward in each of their notes, once the specificity is contained within the encounter, then our hope is that it would get captured to a discharge summary. But unfortunately today, there's no way for us to sort of nudge that provider that says, hey, you need to add this to your discharge summary today. All right. Another question. Does your program utilize APR DRG or mirror the VA's Elixhauser Comorbidity Index, I can never say that word. [LAUGHS] So the important thing to keep in mind as it relates to our AI is we are looking at documentation quality. We're not looking at any type of financial impact. And again, this is feedback that we sometimes hear from, especially, the CDI teams: this encounter is fully maximized, or I wouldn't necessarily have queried for this condition based off of what they're seeing, and the NLU doesn't understand financial impact.
We're really looking at the clinical concept and making sure that specificity is being captured. So we're not necessarily opening up a DRG workbook and seeing what do they map back to. We're really taking from our customers and from our SMEs, what is the use case to capture this specificity, and then sort of it just works its way through the process to say, OK, if you capture this specificity then you'll capture the DRG that you may be looking for. We're not necessarily developing out content as it relates to a specific financial model. All right, great. And let's go ahead with one final question. So everybody has time to get on to their next meeting. Our last question for today. You mentioned customer requests, what's an example of some of those that you've gotten? So as it relates to customer requests. I will really say they are-- they're really all over the board as to really what we could be looking to capture or what a customer may be looking to capture. You could be bringing in to us a specific use case as it relates to maybe a program or initiative that your hospital is focused on, or you are looking at some type of data point that you're not capturing appropriately today. So let's say that you-- this may be a bad example or low value example. But you're looking to capture any time that a patient has any type of bleed. Maybe you're not focusing on a specific type of bleed like a GI bleed or a head bleed, but you want any type of bleed to sort of bubble up to your CDI team to have that encounter reviewed, where we're able to take that information and develop that out. If you think of really how the information that we need-- we need A plus B equals C. As it relates to really developing out the clinical concepts.
So if you can give us what you're looking to capture, either as the actual use case or you can just tell us what you're hoping to capture, our team, with the different SMEs that we have, are really able to put something together and present that back to you to make sure that it's meeting your need. And Josh, we're going to ask you to circle back to the question prior, just some clarification about the-- they don't-- about the comorbidities, they don't currently use APR DRG. So they're not really asking from a financial impact question, more of a quality impact, does that make sense? Yes, I mean as it relates to the quality, or maybe you aren't using APRs, really with our content, I would say yes, you could use it in really any environment to help capture the needed specificity. As well as if there's something that you're looking to capture that we're not focused on or we need to focus on, we can definitely have those conversations and see what we can do in a way of partnering with our customers to really leverage the content to capture what is needed. OK, great. (DESCRIPTION) That's a wrap! (SPEECH) Like I said, this has been a lot of great information and we had a lot of great questions come in. So we really appreciate your time today. So a couple of the questions that did come in were about the recording, or if there is a recording; there will be. So after today's session, it'll take us a little bit of time but we will get this updated onto our website in the next couple of weeks. If you would like more information about these solutions, within the portal there is a Learn More button. Let us know if you would like some more information and we can certainly contact you for that. (DESCRIPTION) Slide title, 2023 3M Client Experience Summit. Slide text, The future is now. Let's go. May 22 to 25, 2023, Atlanta, Georgia. A description includes a venue at the Westin Peachtree Plaza Hotel in downtown Atlanta from May 22 to 25, 2023. Button, Learn more here.
(SPEECH) And we also encourage you, if you are a customer, we would love to have you join us at our Client Experience Summit in May in Atlanta. There are going to be a lot of sessions. And Josh, I don't know if you wanted to talk about some of the sessions that we would have at CES, but if you are a customer, we definitely encourage you to join us. Yeah, so at CES this year, we do have a lot of current customers speaking on behalf of their use and experience of using our AI in their different workflows as well, whether it be provider or CDI workflows. As well as this year, if you have any physician leaders that are interested in learning about our clinician solutions track, this is the first year at CES that we are going to have a provider focused sort of track related to the different clinician solutions applications, and the AI technology is included in that track as well. So if you have physician leaders or any physician liaisons that are involved with your program that you think would benefit from attending CES, please reach out to the team. And let us know, as we are looking to have an interactive physician group, and for the CDI and quality teams there are different topics as it relates to AI being covered, not only by 3M teams but also customers. Awesome. Thank you so much. And just a couple of the last questions that came in around the certificate of attendance: you can use that. Download that out of the resources section. Once it ends and you complete the survey, you can't go back and redownload it. So take a minute just to download the certificate of attendance. And you can utilize that to request CEUs. These are not actually approved CEUs, but you can utilize that certificate of attendance, excuse me, to request those at an accredited association. And again, we will have this posted on our website in the next couple of weeks. So again we really thank you for joining us today. (DESCRIPTION) Text, Thank you. (SPEECH) Please fill out that survey.
We'd love to hear how we did. And we will be having another session or another CDI innovation webinar here. I believe it's the first week of-- first week of May excuse me. So be on the lookout for that registration and we'd love to have you join us again. So thank you both to Dannie and Josh. And we hope you all have a great day. Thank you.


      NLU, clinical content and documentation integrity: A closer look

      • March 2023
      • Join experts from 3M’s clinical content team for a closer look at the engine that powers solutions like 3M M*Modal CDI Engage One. Geared for a non-technical audience, this session will include an overview of Natural Language Understanding, including a review of clinical content rules and how they’re built. In addition, the session will explain the role clinical content plays in helping health systems take significant leaps forward in clinical documentation integrity—while simultaneously improving the physician-patient experience.
      • Download the handout (PDF, 1.8 MB)

    • (DESCRIPTION) Information slide. On24 Platform. A great company is showing what interesting applications a fantastic product can bring for motivated users. Media player, livestream, 320x240. Slides, 640x360. Resources. Have a question? Let us know here in the Q&A window. Want to know more about our products? Ask an expert. Meet our speaker in Speaker Bio. We want to hear from you in the survey window! Copyright 3M 2022, all rights reserved. Title, 3M logo. 3M CDI Innovation Webinar Series. Boost your CDI program by leveraging impactful, quality-based prioritization. December 2022. (SPEECH) Good afternoon and welcome to our final CDI innovation webinar of the year. It's hard to believe that we are heading into 2023. So with us today, we have Stanford Health who will be talking about their CDI program. (DESCRIPTION) On24 Webinar Platform for a better user experience! Use Google Chrome and close out of VPN/multiple tabs. Check speaker settings and refresh if you are having audio issues. Ability to move engagement sections. Ask questions! Certificate of Attendance available to download for live webinar sessions. Engagement tools and CC available. Check the resources section. Complete the survey. The information presented herein contains the views of the presenters and does not imply a formal endorsement for consultation engagement on the part of 3M. Participants are cautioned that information contained in this presentation is not a substitute for informed judgement. The participant and/or participant's organization are solely responsible for compliance and reimbursement decisions, including those that may arise in whole or in part from participant's use of or reliance upon information contained in the presentation. 3M and the presenters disclaim all responsibility for any use made of such information. 
The content of this webinar has been produced by the-- 3M and its authorized third parties will use your personal information according to 3M's privacy policy (see Legal link). This meeting may be recorded. If you do not consent to being recorded, please exit the meeting when the recording begins. (SPEECH) Before we get started and I pass things over to our moderator today, just wanted to go over a couple of housekeeping items about the On24 webinar platform. It is a web-based platform so we do not have a dial-in number. So we recommend using Google Chrome, and closing out of VPN and multiple tabs, as that will help with bandwidth. If you are having any issues, check your speaker settings. You can also do a browser refresh, that usually takes care of any glitches that you might be having. There are several engagement tools within the platform. In the media player, we do offer closed captioning so if you do need that function, it's in the media player. We also have a Q&A section so we do encourage questions throughout, so go ahead and put those in there and we'll get to as many as we can at the end. In the bottom left hand corner is our resources section. There is the certificate of attendance for today. So you can download that certificate and submit it to either AHIMA or ACDIS to obtain CEUs. As well, the presentation for today is also in that section. And then at the end, we always appreciate you letting us know how we did. There is a survey as well, and we'd love to hear your feedback. We are recording today's session, so in the next couple of weeks, we will have that available on our website. So if you would like to go back and listen in again, that will be on our website soon. (DESCRIPTION) Boost your CDI program by leveraging impactful, quality-based prioritization. Mark LeBlanc, CDI Manager, Stanford Health Care. Michelle McCormack, CDI Director, Stanford Health Care. (SPEECH) So let's go ahead and get started. Adriana Harris from 3M is going to be moderating today. 
And she's going to welcome our speakers. Adriana? Yes. Thanks, Lisa and thanks, everyone, for joining today for our webinar entitled, Boost Your CDI Program by Leveraging Impactful Quality-based Prioritization. (DESCRIPTION) Slide, About our presenters. (SPEECH) Our speakers today are Mark LeBlanc and Michelle McCormack. Mark has 40 years of health care and 17 years of CDI experience. His MBA in healthcare administration and his vast clinical experience as a registered nurse assist him in supporting the CDI team to meet their personal, professional, and organizational goals. (DESCRIPTION) He has extensive experience in change management and holds a Healthcare Lean Certificate. He is active in A C D I S, H F M A and A H I M A. (SPEECH) Michelle has been the CDI director at Stanford Health Care since 2013. She earned her associate's in nursing, her bachelor's in nursing, and her MBA in healthcare management. She has experience in CDI dating back to 2005 following her clinical nursing experience in a vast variety of specialties and settings. (DESCRIPTION) She has led successful CDI departments in academic medical centers, community hospitals, and multi-hospital systems. Michelle is also a former National A C D I S Advisory Board Member and a current National A C D I S Leadership Council Member. She holds certifications in CDI, Coding and Revenue Cycle. (SPEECH) So with that introduction, I'll turn it over to you two to give us some background on Stanford and then we'll get into our questions. Great. Thank you so much for having us today. Like you said, we'll jump right in, right, Mark, with talking a little bit about Stanford. (DESCRIPTION) Infographic, Stanford Health Care. Stanford Health Care seeks to heal humanity through science and compassion, one patient at a time, through its commitment to care, education and discovery. 
Stanford Health Care delivers clinical innovation across its inpatient services, specialty health centers, physician offices, virtual care offerings and health plan programs. The only Level I trauma center between San Francisco and San Jose. Life Flight transports 500 patients annually, 49 operating rooms, 613 licensed beds, 67 licensed ICU beds. 371 solid organ transplants in 2017. Kidney transplant patients, 100% 1-year survival rate in the last 2 years. 1,970 heart transplants performed with a 92.7% 1-year survival rate. Admissions, emergency room visits 77,425, discharges 27,167. 1.8 million outpatient visits systemwide in 2018. Mission, to care, to educate, to discover. Vision, healing humanity through science and compassion, one patient at a time. Stanford Hospital, 500 Pasteur Drive, opened for patient care in 2019 with 824,000 square feet of space. Our people, 14,143 employees, 2,902 medical staff, 3,194 nurses, 1,412 residents and fellows. 98.4% of SHC physicians have a star rating of 4.5 or higher. 93.4% of SHC nurses have a BSN, MSN or doctorate degree. Translators & interpreters. We offer Spanish, Mandarin, Cantonese, Burmese, Russian, Vietnamese and American Sign Language, and access to as many as 200 languages through phone interpretation. 8 all-time Stanford Medicine Nobel laureates. 28 dogs in the pet-assisted wellness (PAWS) program. Over 1,000 volunteers provided 62,800 hours of service. Awards & recognition. Stanford Health Care was first designated as a Magnet hospital in 2007 and was re-designated in 2012 & 2016, submitting documents this year, 2020. Magnet recognition is a prestigious award developed by the American Nurses Credentialing Center (ANCC) to recognize health care organizations that provide nursing excellence. Fewer than 7% of US health care organizations achieve this honor. Vizient Quality Leadership Award 2019 winner, ranked in the top ten percent for both inpatient and ambulatory care. 
The Stanford Stroke Center is designated as a comprehensive stroke center, providing the most advanced and rapid stroke care for patients nationwide. Best Hospitals US News & World Report Honor Roll 2019-2020. Leapfrog Top Teaching Hospitals 2019. Named one of the nation's best teaching hospitals by the Leapfrog Group, a top health care watchdog organization that evaluates providers based on rigorous quality and patient safety standards. Stanford Health Care is part of Stanford Medicine, a leading academic health system that includes the Stanford University School of Medicine, Stanford Health Care, and Stanford Children's Health, Lucile Packard Children's Hospital. Stanford Medicine is renowned for breakthroughs in treating cancer, heart disease, brain disorders and surgical and medical conditions. (SPEECH) So Stanford is a major academic medical center, a Level I trauma center. We do all transplants. The only service I think we do not offer is a burn unit. So that is the one area, I think, that we don't have. We have a lot of residents. You can see lots of residents and fellows. Lots of research. We have eight all-time Stanford Medicine Nobel laureates, and I think that might actually be nine now, because I think we had one this year. And then we're always looking at how we compare to other organizations. So we've listed some of the awards and recognition. That's something that's really paramount at our organization as well. Mark, anything I forgot about us? Yeah. As part of our Stanford Health Care group, we do have a facility we call Tri-Valley over in the East Bay, which is a community-based hospital. They have OB. They have peds and neo, as well as the standard community-based hospital services. So we have a little of both. (DESCRIPTION) Org chart summary. Nine CDI specialists at the top of the chart break down to six CDI line leads below that. 
Mark LeBlanc is the manager, with two CDI quality and outcomes leads under him. Next on the chart is Michelle McCormack, who is the director. (SPEECH) Yes, good point. So this next slide is a little bit of an eye chart. But I know it's near and dear to Mark's heart, so I'm going to let you walk through this org chart. Mark? Yeah. I think, for me, over my career and especially in the last 10 years or so, it's become really obvious to me that for change management, for innovation, and for being able to set a team up for success, setting a culture is really important. Michelle and I have similar visions, and so we were able to create this upside-down org chart. And it's one that we're very proud of. We are at the bottom. And I can proudly say I enjoy being down there, because I feel like we are there to support all the team that's doing the work. And if they're all successful, then we become very successful as a whole and as a leadership group. So we have a couple of provider champions, one at the academic medical center and one at our community-based facility, who help support us. And we work very closely with our coding department. And we all report up through revenue integrity, which is part of the revenue cycle family at Stanford. We have a couple of quality and outcomes leads that we utilize for a lot of the work around mortality, PSIs, HACs, and other projects to improve our outcomes. We have two education leads who help support providing education for the team and individual staff members, as well as bigger educational initiatives across the organization. I think one of the keys that we've learned over the years is that we have service line leads. These are people who support specific service lines and provide education directly to providers. They help provide data. They help partner on improvements in documentation directly with those service lines. 
And then finally, we have our CDI specialist team, which is a phenomenal group of frontline specialists that do all of our prioritization reviews. They do the majority of our queries. And they are definitely advanced in the type of work that they do. All of that makes up our CDI program here at Stanford. We also have a career ladder, which goes along with this as well. Michelle? Yeah, great. Yeah, I think you covered everything. The only other thing I would add is that our two physician leads are also supported by our volunteer physician champions. So we have at least one volunteer physician champion from every service line who also partners with those two providers. So I think that support has been really helpful, too. Want to talk a little about our Encompass journey, Mark? (DESCRIPTION) Slide, 3M 360 Encompass System Journey. 360 Encompass go-live November 2018. Passed on prioritization, One work list, Specialists with unit assignments, Sorted by units, Shared accountability, Specialized reviewers, Only covered specialized units, PTO coverage by team, Final reviewer, Completed final CDI review, Validated impact of all queries on case. Logo, Journey, a path to success. (SPEECH) Sure. I spoke at the summit this year, and I think I introduced myself as the person who said no way to prioritization in the very beginning. So back in 2018, we went live with our 360 Encompass. And prioritization was there as a function, and they explained it to me and really tried hard to get me to be an early adopter. And I will say that I passed. It was a big change for the team to go on 360, and we were really trying to hone in on our work. At the time, they were all unit-based assignments. The team was very siloed in the type of work that they did. And so we were just trying to move to a more team approach. And I was struggling with the live prioritization offered at that time. 
And I think we all can see, over time, as they improved the product, it definitely became a much better thing for teams to use. And we really wanted to promote shared accountability. So we were trying to figure out how to get the team to be more involved together and own the work as one, and not own an individual part of the work and feel like they weren't part of the whole team. Anything on this? Yeah, I really just wanted to let you tell on yourself about how critical we were of prioritization, and you really were the person who was the most cautious about it. But I think it was very beneficial. I think you worked a lot with 3M on your concerns. And you did a lot of investigation beforehand. And I think that's one of the reasons why you are probably a really good reference for that system, because you're a convert. You were not really on board at the beginning. So I appreciate you telling on yourself about that. (DESCRIPTION) Text, What are you leveraging to maintain productivity while asking your teams to do more with the same amount of staffing? (SPEECH) [CHUCKLES] Yeah, and I think, when 3M listened, and then went back, and then came back and tried to offer it again, all the concerns I had were gone. They had actually gone back and worked on all of those and more. Yeah, great point. That's great. So we'll start with our first question. What are you leveraging to maintain productivity while asking your teams to do more with the same amount of staffing? (DESCRIPTION) Slide, Setting the stage. Culture. Quality focus, Continuous quality improvement, Collaborative focus, Trust in the systems, Transparency metrics, Accountability focus, Continuous feedback, Utilize notifications. Change management. Trust, Goal setting, Education, Support, Open communication. (SPEECH) That's a big one, right? I think that [CHUCKLES] is something that's going to echo across every organization, every company for that matter, to do more with less. 
And I think we have a great team who is really good at being creative and sharing their thoughts on how we can improve our processes and our technology. And I think Mark has done a really great job of creating that culture and enhancing it with his leadership. And so I would love for you, Mark, to talk about your strategy working directly with those teams. Yeah, I think, in the beginning, it was, as we said, that we're all there to help support each other, but each level would then provide support in its own way. The people that we want to make sure are the most successful are our frontline staff. And I think they realize now that that's what we're all doing behind the scenes, and on the org chart, the way we've developed it. We're always listening to feedback from them: what do they think could help improve their daily work? We're very transparent with the teams about the metrics and where we are, and where we, as leaders, see potential areas of concern, and we get their feedback on what they think might be some of the issues or barriers they're facing when we're not hitting those metrics. That's really been a big thing, the transparency and developing that trust. We do set goals. I think most people enjoy having some goal to be able to test what they're doing against and see where they are. Education is a big part of our program. In fact, we're missing our education session this morning, which is collaborative with our coding partners. So we do that, and we provide education for the staff. We encourage them. And we've been very lucky that we've had the same staff for five-plus years. And so that's huge. I have to admit, it makes doing all of these things and asking for more easier, because people have been around and they want to always be challenged and learn. So I think that's been a really good thing. Yeah, I agree. 
And I think, especially as we moved to prioritization, reinforcing this and being really thoughtful about how you include everybody really led to our success story with it. (DESCRIPTION) Graphic chart, Start. 3M 360 Encompass System Standard to Prioritization. Organizational initiatives ensure that the system was being used to the maximum capacity. Team decision to move from specialized to generalized specialist. Leadership desire to make sure that resources were being utilized efficiently. Organizational focus on quality. Organizational focus on provider satisfaction. (SPEECH) So let's talk a little bit in detail about our journey with prioritization initially, and how we took our vision, as you said, we have this big vision, and tried to tweak the prioritization to meet those needs. I think you're right that we started with those organizational goals and the initiatives that were in place to test the system, and those were the first areas that we focused on. But I think you were pretty strong in your opinion that we start with the 3M settings and not do a lot of customization right off the bat. You did have to set up our specialists to look at their work a little differently to make this successful. And I know that move from being specialized on a unit or specialized with a certain service line was something you and I were careful about. But in the end, it was the team that said, we want to move to that anyway, right? That was something they had wanted to do. Yeah, I think it was really key. Well, first of all, I said to 3M, I want to talk to some of your customers who've gone on prioritization and hear what-- I don't want to make the same mistakes. And the person that I spoke to was awesome. We spent a lot of time on the phone with her team. And then she said, one thing I will tell you is, go out of the box. Take the settings out of the box. 
Try not to make too many tweaks or personalized settings for your organization unless there's something really big and pressing. And then continue to work on honing in on where you want to make those changes. Because, she said, we did too much in the beginning, and we didn't really know how what we did impacted the outcomes. So we really took that advice. And I can tell you, over the last few years, that's really helped us. When we make a change in prioritization, it's obvious. We see the difference and how it's impacting our work. And yeah, I was really surprised. I always liked generalized. I always enjoyed having different types of patients and different types of reviews. But it was the team, as we talked about the functionality of prioritization, that actually saw, well, there's no way we could have one work list, and use prioritization points, and still keep it separated. But we did that. And we have provided them an avenue where the specialized people-- so if I had a case on a neuro floor, I knew who the neuro person was from the past, and I could reach out to that person and say, hey, I need some help understanding how to really do a good review on a neuro case. I think giving them that freedom and encouraging them to use each other really built them into a really strong team. They are on what we call-- well, they are on chat all day long, helping each other, asking questions, posing things. And it's really amazing to watch them do that work. I agree. Yeah, it's been really fun. And I think just using that prioritization and having those discussions about what we're going to change and what we're going to add has really helped them understand on the frontline what impact all of those different quality indicators and measures have. That was a big focus for our program from the very beginning. 
But I think it was hard to get that to really filter down to the frontline staff and to learn all the details. And that tool helped us nudge them a little bit more down that path. (DESCRIPTION) 3M 360 Encompass System Standard to Prioritization project kick-off 2020. Aggressive timeline. September kick-off, November 1st soft go-live, Mid-November team training, December 1st team go-live date. 3M standard (out of the box), Minimal setting changes. Super users, Proficiency in 3M 360 Encompass, Understand the "big picture". Start. (SPEECH) Oh, yeah, for sure. I mean, they are already now doing PSI and HAC reviews concurrently when something fires. Because the team, the frontline staff, do assign codes. And they do assign a DRG. And they do know what's firing, what's not firing, and how to work on it. They'll send queries if needed. And so that's been really fun, to watch them grow and do even broader reviews. Yeah, definitely. I just want to point out on this slide, we talked a little bit about how we took the 3M standard out of the box. But I wanted to address the fact that once you decided to go live, you didn't waste any time, right? We're going to get this here. We're going to put this in. And we're going to start to use it. I think the other really important aspect of this, and really any technology that you're going to integrate into your process, is to identify some super users. And you went with the frontline staff. And you said, hey, we need some super users. Who's interested? Who wants to do it? And those super users are really the people who drive these changes. They're on the calls with 3M. They're making decisions or recommendations to 3M about what we need to do and then bringing it back to you. And that, I think, is really empowering for a team who understands their goal and how they connect to the organization. (DESCRIPTION) 3M 360 Encompass System Prioritization Journey Begins. October 2020, System build, Workflows documented. 
November 1, 2020, Soft go-live with super user group, Daily check-in, Weekly 3M/IT meeting, Utilized old worklist and new worklist. Mid-November 2020, Team education, Super user led. December 1, 2020, Team go-live, Command center, Daily check-in. Quote, The journey of a thousand miles begins with one step, Lao Tzu. (SPEECH) Yeah, they're constantly helping us with prioritization. And they've become stronger. We meet monthly and review requests that come in around issues that people are having, and see if we can change prioritization scores to move cases around, and move them up, and change initiatives, and so on. And it's amazing, because I don't say a whole lot on the 3M calls anymore. They really lead the conversations with the technical people, and ask the questions, and they come prepared. It's great. Yeah. And I know we added this slide here just to give a little bit more detail in case folks were thinking about how to go live with prioritization: what are the specific steps we took, and the timeline. I think the command center was a really interesting thing we did at go-live, but I don't think we really needed it. We had it there, but nobody joined. And we were just sitting there chatting with each other all day, which is fine. I mean, that's the go-live you want, right? One where you don't need to intervene. And I think the soft go-live with the super users was key as well, because those people looked at the old work list and the new work list, and they validated that the new work list was working. And then when we turned off the old work list, they could tell the team, nope, we need to do it. It works. We've been doing it for a month. And we see it. We know it works. So that was another key, giving them the opportunity to validate that we weren't going to be missing things in the new world. Yeah, great point. All right. 
And our next question is, how did the CDI and coding departments get a seat at the table for the discussion regarding quality outcomes? (DESCRIPTION) Slide, Engagement Strategy. Meet them where they are, unmask the models & algorithms, analyze for all opportunities. (SPEECH) Oh, yes. We're still trying to get seats at the table. No, it's a challenge. I think the bigger your organization is, the tougher it is to understand all of the places you need to have a seat at the table. There's lots going on, and a lot of overlapping efforts. I think one of the big things we did was that we had already been working with our quality team on specific reviews. So we really worked with them to understand where their leaders were at. What did they understand about the quality outcomes we were focusing on, and how did we, as CDI and coding and documentation, fit into that? Once we figured out where they were at, we did a lot of work to unmask the models and the algorithms. We were very open with that. We showed them how the different areas, the different codes, impacted the models and our scores. And then we analyzed the data, of course, but we analyzed it for all opportunities. So in our organization, as a CDI department, we are focused on complete and accurate medical records. When we touch a case, even if we're touching it because we want to look at a specific outcome, we are looking for all opportunities. We see every medical record as an educational opportunity and a learning opportunity. And I think that approach has been eye-opening for some of our quality leadership, who may have a focus on a few different quality outcomes or be in meetings about a specific outcome, but we're bringing up other opportunities as well. I think that's been really helpful for us in terms of engagement. And we have to repeat this a lot. We are years into this now. 
And I think we're still reminding them about how the models work, and what the barriers are that we have as an organization outside of the documentation. (DESCRIPTION) Steps to Success. Data analysis and validation. Need to agree on the performance measurement outcomes, Need to trust the data and have a process for validation. Goal and Messaging Alignment. Alignment of goals, SMART - Do we agree? Mutually beneficial - WIIFM, Balance everyone's needs - patient-centered and mission-minded. Performance Transparency, Dashboards, Distribution and Access. Advocacy, Think big! Inset box, Patients over Paperwork. Reduce unnecessary regulatory burden to allow providers to concentrate on their primary mission: improving patient health outcomes. (SPEECH) So, steps to success for all of this engagement with our quality partners. I think Mark mentioned our transparency. That is one of our mantras in CDI: we're going to analyze the data, we're going to be really open and transparent with it, and we're going to validate it. We're going to validate the impact that we have. One of the biggest and, I think, most challenging aspects was for all of us to get on the same page about goals. What should our goal be for different aspects? And really talking about who is responsible for those different elements. So obviously, documentation impacts expected outcomes and exclusions, right? But we didn't want to leave out the fact that we probably do have opportunities to improve some of our care quality, or that there's other data we need to look at to see why certain things are happening from a clinical perspective. And we needed to start to trust that data so that we could have that conversation and align those goals. And then advocacy is another-- oh, it's been such a challenge with COVID and the pandemic. 
But we really started to make some big steps in terms of advocacy with CMS around some bigger coding changes, some challenges that providers have from a documentation perspective. There was a little bit of progress there, but now it's been a bit on hold due to the pandemic, and all of the focus on the clinical work, which absolutely needed to happen. That was the right thing to do. (DESCRIPTION) Going Beyond Traditional CDI Efforts. RCC Improvements and Reporting, Integrated documentation tools and strategy, Ongoing since 2017 (RCC - managed by CDI). Use has become a largely consistent and standard practice for providers, Meaningful use is strong and has influenced improved capture and performance and reduction in queries. Admission Status, Process, standardization and governance, Project in progress with updated completion goal of 4/30/21. Provider Experience, Technology optimization, Reduce provider burden. CMS Advocacy, Pathology report and pressure ulcer code capture guidelines - Potentially industry impacting. (SPEECH) Yeah, and I would say also, with our quality partners, that partnership really has grown over the last couple of years. Just getting them to understand all the work that's been done by CDI and coding around reviews, and queries, and capture, and multiple second-level reviews in certain instances. I don't think people understood the number of touches that some of these cases sometimes get. And it was eye-opening for them. And I think it helped them to understand the work we do and how it supports what they're trying to accomplish as well. Yeah, I absolutely agree with you. I think we also wanted to call out some of the things that we did that were a little bit outside of what we would maybe term traditional CDI. And taking those steps with the quality teams and the quality leadership really helped to build that trust. So one of the areas-- we are a Vizient member, and we benchmark through Vizient. 
And so one of the aspects that was a big concern was the accuracy of admission status. So we led the efforts to create a process, standardization, and governance for admission status. And that was a big lift for us. We really had to engage a lot of other people who were not CDI. They were not coding. We were pulling leaders from the clinical side, from PSS, to create this governance. But it was very successful. I think the other thing we've done a lot of work on is our documentation tool, the .RCC. That is a tool that takes a lot of time for our service line leads; it is a preemptive documentation tool to help providers select conditions they might not think about documenting on their own. And then the other aspect was just our partnership with the providers. The more we work on all of these tools, and the more we partner with other departments on other initiatives, the more burden there is to potentially add to the provider. So we also rolled out a provider experience metric. And we have a big effort and a big focus on technology optimization for the provider. So I think those efforts really went a long way toward partnering with that quality team. All right. And our next question: how do you get your frontline staff to incorporate quality into their daily assigned tasks and workflow, thinking beyond the basics like financials, SOI, and ROM? (DESCRIPTION) Circular graphic, PSI/HAC Review Integration. Multidisciplinary pre-bill review of PSI/HAC cases, AES edit workflow, Focused education regarding PSI/HAC and exclusions, Staggered rollout of concurrent PSI and HAC reviews by CDI staff, 360E tools, Ongoing feedback and accuracy scores for CDI staff, notification processes in 360E, Organization focus on quality. (SPEECH) It's easy for us here. But the way we did it was, we made a project out of wanting to incorporate PSI and HAC reviews, moving them from a retrospective effort to a frontline concurrent effort. And so we created the workflows. 
We had input from the specialists. We had go-live dates. We had lots of education. We rolled it out one or two PSIs at a time so that they could learn. The tool actually tells us when something is firing and alerts us. So it was just a matter of people learning how to incorporate that: just like reviewing edits, also looking at when PSIs and HACs are firing and being able to work on them. And yeah, we give lots of feedback. Yeah, I think, for me, being a little bit more removed from the group, it's just seeing their interest in this, and how they wanted to learn more. They wanted to learn more about what other things we should be looking at outside of just the PSIs and HACs. What else are we looking at? How does Healthgrades work? They're really curious now. And that curiosity is fun. It's also tough for us as leaders, because they push us. They push us really hard to take them to the next level and to provide them with these new areas to explore. So it's really fun for us to watch that happen. And also to acknowledge all the work that the quality and outcome leads were doing prebill. There was a lot of discussion about, wow, you review all of these. And you must have sent a lot of queries on some of these, because I didn't realize that this was an exclusion. I think that was fun as well. And how has effective collaboration been created between those teams, CDI, coding, and quality, in establishing aligned goals and criteria? (DESCRIPTION) Graphic, Multidisciplinary Collaboration and Goal Setting. Accuracy is in the middle circle, surrounded by record, performance outcomes, code capture, reimbursement. (SPEECH) I think it's a good question. I think it's something we keep trying to grow and enhance. The way we were able to get everybody focused is to put the accuracy of the medical record at the center, at the heart. 
And then we talk about how we are all coming at it from different perspectives. But the key is still the accuracy: the accuracy of the medical record, the accuracy of the code capture, making sure that the patient has an accurate medical record for continued care. Everything else flows from that accuracy. So our performance outcomes should fall where they're meant to fall if everything is accurate in the medical record. And I really think we still have times where we are challenged and we disagree about things. And then we just come back to that accuracy. And if we have a question-- I don't know what you think, Mark, but I feel like we have a really low threshold for queries. And I feel like, as leaders, we are often like, yeah, just send a query. We shouldn't argue about it, just send a query, right? [CHUCKLES] Yeah, I mean, I think that has become the mantra a lot of times. After enough people have talked about it, if we're all having this long of a discussion, then somebody else, another set of eyes, is going to have the same question. So just clarify it, right? Just get the query out there. And I think we had a luxury in that we sat on the same floor as our quality folks, so they were in cubes not far from us. There were a lot of shared discussions just because of where we sat. But going live in 2018 with 360, we included them, because we wanted to have a multidisciplinary process within the system. So they were there with us during training. They were there as we developed what that workflow would look like and what their part was in that workflow. And so, especially since the pandemic when we all went home, it has really helped that collaboration develop and become a lot stronger and easier. Yes, that's true. I'd forgotten about that. It's been a while since we've been in the office. But yes, that was really helpful for us at that time. 
And I think it was also interesting to me to learn that even folks who have really focused on quality for a lot of their career didn't understand the details of the coding behind it. Because they're not coders, right? They're looking at it from the clinical quality perspective. But they are very interested in learning about that, and they have become experts in that as well. And so that's been really fun for us to share with them. And what are some of the shared KPIs (DESCRIPTION) CMS, mortality, etc (SPEECH) amongst the specialties, like CDI, HIM, and quality, that you're using? (DESCRIPTION) Operational Metrics. Review Rate, Review Timeframe, Meaningful Reviews. Query Rate, Concurrent vs Retro, Meaningful Responses. Match Rate, Final CDI Review Impact, Reason Code Definitions. Accuracy Rate, Code set definition, Clear, aligned expectations. Query Response Turnaround Time, Defined escalation process, Shared Accountability. Multidisciplinary Reviews, PSI/HAC, mortality, AWOP and other internal. (SPEECH) Yeah, so we broke this up into one side for operational and one side that's more outcomes focused. I would say that all of these are really shared goals between all of our departments. Although CDI may have more accountability and responsibility for some metrics, and coding may be more focused on others, or a metric may be a direct result of their work, I think we are all keeping an eye on all of these. We have very open communication. Our dashboards are published. Everybody in the organization can see them. They can go to the website and look at them. And I think just being transparent about that has helped people adopt some of these metrics as shared metrics. One of the really important efforts that I just want to call out again, Mark, I'm going to put you on the spot here, is the code set definition in our accuracy rate. 
So I think one of the areas where we found our opportunity was in our matches, or match rate, with coding. And one of the things that we started talking about pretty early on was the fact that our expectation was deeper than the DRG. And how do we establish or change that, I guess, from a DRG match rate to an overall match rate? An overall match, and what does that mean, and how do we define that? Do you want to share a little bit about that journey? I know we're not quite done with that journey yet, but you want to share your thoughts on that? Yeah, I totally agree. I mean, the system will just match us at the DRG level. But in the final CDI review process they do, the CDI specialist is accountable for looking at the entire final code set as compared to what their code set was. And if they see discrepancies in POA status, or they see codes missing that they feel meet the definition, then they are encouraged to raise that question with the coder and have a discussion to see if they can get some of those things resolved. And we have an escalation process that goes on up through all the different levels if need be. And we're starting to see more and more, and we're moving more to where the coders are actually sometimes initiating some of these. I'm going to-- I think this should be POA, or no. Or I don't think this code meets the definition. So we're continually working on both sides. And it is a process and something that we'll continue to work on. The other one is the escalation process for our query responses. So the organization has a two-day expectation for responses. And in CDI, the service line leads do all of the escalations of queries that aren't answered in that two-day period, because they have the relationships with all the providers. And so right now, they do all of that. And then they notify whoever the query author is when it's responded to. And so it helps provide some efficiency around that as well. 
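The final code set review described above, comparing the CDI specialist's working code set against the coder's final code set and flagging missing codes or POA discrepancies, can be sketched roughly as follows. This is a hypothetical illustration only: the data structures, field names, and example ICD-10-CM codes are assumptions, not 3M 360 Encompass's actual data model.

```python
# Hypothetical sketch of the CDI/coding reconciliation check described in
# the discussion: compare the CDI specialist's working code set against the
# coder's final code set, flagging missing codes and POA discrepancies so
# they can be raised with the coder before the bill drops.
from dataclasses import dataclass

@dataclass(frozen=True)
class CodedDx:
    code: str   # ICD-10-CM code, e.g. "A41.9" (illustrative)
    poa: str    # present-on-admission indicator: "Y", "N", "U", or "W"

def reconcile(cdi_set: list[CodedDx], final_set: list[CodedDx]) -> dict:
    """Return discrepancies between the CDI working and final code sets."""
    final_by_code = {d.code: d for d in final_set}
    # Codes the CDI captured that dropped out of the final code set.
    missing = [d.code for d in cdi_set if d.code not in final_by_code]
    # Codes present in both sets whose POA status disagrees.
    poa_mismatch = [
        (d.code, d.poa, final_by_code[d.code].poa)
        for d in cdi_set
        if d.code in final_by_code and d.poa != final_by_code[d.code].poa
    ]
    return {"missing_from_final": missing, "poa_mismatches": poa_mismatch}

# Example: the CDI captured sepsis (POA "Y") plus acute kidney failure, but
# the final set dropped the kidney failure code and flipped the POA flag.
cdi = [CodedDx("A41.9", "Y"), CodedDx("N17.9", "Y")]
final = [CodedDx("A41.9", "N")]
print(reconcile(cdi, final))
# {'missing_from_final': ['N17.9'], 'poa_mismatches': [('A41.9', 'Y', 'N')]}
```

In a real workflow, each flagged discrepancy would feed the discussion-and-escalation process the speakers describe, rather than being auto-corrected.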
And we look at it every day in our huddle. How many of our cases are holding up the final coding process, so that we make sure that we're not losing sight of keeping those moving forward. So yeah, a lot of shared things that go on. Yeah, that's a great point to add that we have a DNB goal, because we didn't put that on here. But we do report out-- excuse me, we report out all of the cases with open queries or that are in a mismatch review. CDI is responsible for that. And so, I think, that has been really helpful as well to build that partnership with coding, to recognize and acknowledge the importance of that DNB metric as well. I did, also, want to just touch on the fact that we have these multidisciplinary reviews. Our quality reviews, PSIs and HACs, and other quality outcome reviews include our quality and outcome leads, our coding quality specialists, and our quality improvement analysts. So our quality team as well, as you mentioned. We do coding and CDI prebill mortality reviews on 100% of our mortalities. And then, we have an AWOP, which we will talk about a little bit more in one of the slides coming up. But that is our internal analysis, where we find potential areas of opportunity and we do some internal multidisciplinary reviews. And those reviews include our medical directors as well, which, I think, has been really interesting for us. (DESCRIPTION) Performance Outcomes. Benchmark and Comparison. Expected Mortality, Expected Length of Stay, CC/MCC Capture, Other Risk Adjustment, Complication Rates. Internal/Year-over-Year, Financial Impact, Case Mix Index, Query Rate, CC/MCC Capture, S O I/ROM (SPEECH) And then, from a performance outcome perspective, you see a lot of the traditional case mix index, CC/MCC capture. But we're looking at them both internally year-over-year, as well as through many different comparisons and benchmarks. We don't focus in CDI and coding on one performance metric or one benchmark. We use 3M's benchmarking. 
We do a lot of work with the PBM report. We look at Vizient benchmarking. We look at the CMS compare. We look at all of these different comparison groups, because that helps us identify where there's an area that's at higher risk of being a concern. If we're just looking at one, we found that we were having a lot of false positives, I guess, is what we would call it. Whereas when we took the bigger global perspective, we were able to weed those out by looking at it from those different perspectives. Anything here that I'm forgetting? Because you did think of the DNB goal on the last slide. No, I think you covered it all. All right. What continuous improvement practices and tools have you incorporated to maximize outcomes while maintaining appropriate staffing levels? (DESCRIPTION) Optimization and Ongoing Process Improvement Efforts. Advanced Sequencing, Utilizing Code Comments to identify impactful codes, Care Quality (Mortality) Review Process, Query Reconciliation Process, Automated Query Impact, Other Query Impact utilization, Utilization of Organizational Outcomes, Internal Review Process (A W O P, PDM, CDI Accuracy, Other), Prioritization point recommendations (SPEECH) Oh, this again. This is Mark's-- this is his baby, again. All of the performance improvement and process improvement in QA. So I'm just really proud of all of this that has been developed. And I'm going to let you brag a little bit, Mark, about it. Yeah, I think part of it, as we went with prioritization, is we continue to make sure that we're utilizing the system to its maximum capacity. I think that's the advanced sequencing functionality that we've been an early adopter of and continue to use. We did a-- coding was wanting to know more about some of our Vizient outcomes. And so we were able to create some code comments around potentially impactful codes that have a note in the system, so that they can start to see, oh, I wouldn't have thought that would be impactful, right? 
And also making sure that people are looking at all the other drivers as well. We instituted the Care Quality review process. So that's been a big help, because we had created this very convoluted and robust notification process that just seemed to be overwhelming at times. And now, since 3M has built this into the system, it seems to be much more efficient and works very well and quickly. The query reconciliation process and this automated query impact are awesome. We still have other query impact. But we actually-- we do an audit every month. And the staff are really great at making sure they get all of these impacts set right in the system. So we're enjoying the automated piece as well. And then we do do a lot of stuff around, like she talked about, AWOP, PBM, CDI accuracy. And we continue to get feedback, through some of these internal reviews, around where other people think that we should be looking at these cases. And then we look to see how we get that DRG higher up into our worklist when identified. Yeah, a lot of great work. A lot of effort by our super users. They've been really vital partners for us in all of these efforts. So yeah, lots of great work, especially in the last year or two. Well, on those same lines, what's on the horizon or the next thing to be on the lookout for within your CDI program? (DESCRIPTION) Post-it note, What's Next? Text, The Journey Continues. Enterprise Workflow, New SSR Prioritization Reports, Continuous Analysis of Prioritization, Super-user/IT bi-weekly meetings, Real-time provider facing A I, Query delivery and response optimization, Diagnosis Auto-population project (SPEECH) Oh, well, what's not on the list, right? The more we do, the more the organization sees it, and the more we're asked to do. So we have a lot of efforts, actually, partnering really closely with 3M, now that we have our super users who are consistently asking for enhancements, and changes, and adaptations. 
And I think that has been really fun. And I think the enterprise workflow beta that we're doing now is going to be really fun. It's not live yet. We're in the midst of it. But that's going to be exciting. And I am a data-- I'm a data bug. Everybody knows I like to look at reports. And so we're waiting to see how we can look at those new prioritization reports and the new reports for the care quality mortality reviews. And we are, of course, looking at provider facing AI. A lot of people ask us about that, have we integrated that into our program. And are we going to-- we are looking at it currently. We're also looking at our query delivery and response processes to see if we can optimize that or make that easier for providers. And then we have a lot of diagnosis auto-population that we're doing internally with our electronic medical record. So we're building in our medical record some criteria that would auto-populate some diagnoses for validation by the provider. So we're looking at all of these tools to work together, right? That provider facing AI, the real time nudges, our auto-population, as well as our other documentation tools. Mark, anything else that you think we may want to share about what's coming? No, I think that's a lot of it and the biggest stuff that we're working on. And I think we're very lucky that we have a lot of providers that are researchers as well. And they're very tech savvy. And they actually think we might be moving too slow. So they push us really hard, so. [CHUCKLES] That's true. [CHUCKLES] I think that's going to be our big thing this year, how to meet their needs around the AI and stuff, so. It's always a balance, right? We're either moving too fast or we're not moving fast enough. So yeah, that's interesting. (DESCRIPTION) Save the date for 3M Client Experience Summit. When: May 22-25, 2023. Where: Atlanta, Georgia. What: 3M CES is the premier event for clients of 3M H I S. Go to our website for updates and to subscribe for more info! 
Interested in speaking? The call for proposals is now open until Jan. 13! 3M CDI Innovation Webinar Series. Copyright 3M 2022. All rights reserved. (SPEECH) Well, that was the last of our questions for you. Mark and Michelle, I want to thank you for your time today. If the audience has any questions, they can throw those into the Q&A. But I appreciate your time and the audience's time today. And I just want to turn it over to Lisa to do a quick update on the 3M Client Experience Summit. Yeah, thank you all for joining today. That was a great presentation. And like Adriana said, thank you to all of our-- all of our attendees today. So if you are a current 3M customer, you may have gotten information-- hopefully, you got the information about saving the date for our upcoming Client Experience Summit. It will be taking place in May this year, May 22 through the 25 in Atlanta, Georgia. We are going to be moving out of where we normally go in Salt Lake City. We're going to try Atlanta. And hopefully, that'll be easier for people to get to. I know it is for me being on the East Coast. So I definitely can give a thumbs up to that. And again, that is for our clients that are interested in attending. So if you are interested, there is a link in the resources section to go to our website, where if you didn't get information, you can certainly subscribe to get updates. And if you are a thought leader and you'd like to talk about the experiences that you've had at your facility, we'd love to hear if you'd like to speak at CES. So the call for proposals is open now, from now until January 13. So take a look at our website for more information. Again, today's webinar was recorded. It will be on our website in the next couple of weeks. The October webinar, I believe, Adriana, wasn't that Piedmont? I believe that was-- Piedmont House, yeah. Yep. So if you would like to go back and listen to our October webinar, please also take a look within the resources section. 
Go ahead and download your certificate of attendance before you close out or complete the survey because you won't be able to get back in to download that if you complete the survey. So, yes, download the certificate of attendance, the presentation, and be on the lookout. We are busy planning our 2023 CDI Innovation Series. So hopefully, in the next couple of weeks, we'll be sending out some information so you can start to get registered for the webinars that we have next year. So again, cannot thank Stanford for joining us today. It was a great presentation. And again, thank you to those that joined and have a great holiday season. And it's crazy to say, but we'll see you next year. So thank you all for joining today. Yeah, thank you so much. Yeah, thank you so much. (DESCRIPTION) Text, That's a wrap!

      Webinar title slide

      Boost your CDI program by leveraging an impactful AI-based prioritization

      • December 2022
      • It is no secret that health care is undergoing a drastic transformation impacting the CDI profession. A disconnect often exists between quality, CDI and coding teams on what can be controlled through documentation. By leveraging AI-based CDI prioritization technology, Stanford Health Care ensures it utilizes prioritized worklists that focus on the most impactful cases down to the DRG level. This focus can be customized and adapted as the health care environment transforms. Learn how Stanford Health Care tackled this challenging landscape through proactive, connected tools and a culture driven by quality.
      • Download the handout (PDF, 1.3 MB)
    • (DESCRIPTION) Slide presentation. Logo, 3M, Science, Applied to Life. Text, Taking Piedmont CDI to the Next Level for the Win! 3M CDI Innovation Webinar Series. October 2022. A man in a white coat and a woman in blue scrubs sit together at a table looking at a tablet. (SPEECH) Good afternoon and welcome to our October CDI innovation webinar. Before we get started, I am going to go over a couple of housekeeping items. And what we'll be talking about today is taking Piedmont Healthcare CDI to the next level for the win. We have a couple of great speakers here today. So we're really excited to have them. Before (DESCRIPTION) New slide. Text, On24 Webinar Platform for a better user experience! Use Google Chrome and close out of VPN/multiple tabs. Check speaker settings and refresh if you are having audio issues. Ability to move engagement sections. Ask questions! Certificate of Attendance available to download for live webinar sessions. Engagement tools and CC available. Check the resources section. Complete the survey. (SPEECH) we get started, I just want to go over a couple of things. This is a web-based platform, so if you are having any technical issues, make sure you're in Chrome, and close out of VPN or out of multiple tabs. That'll help with bandwidth. And a lot of times, if you just do a quick refresh, that will help with any problems that you might be having. Because this is a web-based platform, we do not have a dial-in number. So you will want to use your computer audio. So again, if you are having any issues, make sure you check those settings. Because this is a new platform, I just want to also go over some of the engagement tools that you have. So in the top area, you have a Q&A box. So if you have any questions, we encourage questions, please put that into the Q&A box. We'll get to as many as we can at the end. Down at the bottom left, you should see Resources. So that is where the certificate of attendance is for download. 
You can also download the presentation from today, as well as a couple other resources. If you missed our August CDI webinar, a link to that recording is in there as well. In the middle, you can see an area where, if you would like some more information, you can click and let us know. And then, if you are interested in learning more about our speakers, there's a speaker bio section. And then, we always do appreciate you completing the survey at the end to let us know how we did. So also, one final thing, if you do need closed captioning, that is available in the media section of your dashboard as well. (DESCRIPTION) New slide titled Meet our speakers. Headshot photos of each speaker. Text, Gail Higle, BSN, RN, CCDS, Manager of Clinical Documentation Improvement, Piedmont Healthcare. Niki Spear, BSN, RN, CCDS, Manager of Clinical Documentation Improvement, Piedmont Healthcare. (SPEECH) So again, like I mentioned, we have some great speakers today, Gail and Niki, from Piedmont Healthcare. So I'm going to go ahead and turn it over to Gail to get things started. Gail? (DESCRIPTION) New slide. Text, Objectives. (SPEECH) Good afternoon. This is Gail Higle. Our talk today is about how Piedmont, in 2020, took the-- our CDI department to the next level for the win. And as most of you know, our Georgia Bulldogs are number one, and they won the national championship last year. And CDI has a lot in common with working together as a team. And so, we want to tell you how we, as a CDI team, became successful using Priority and Impact ROI, our wonderful 3M technology. 
The objectives of today's talk are to focus-- our whole reason that we started using this was to focus CDI reviews on the most needed cases, to maximize the use of worklists, prioritize needed follow-up reviews, benefit from the AI auto-suggested codes and queries, easily reconcile cases with coders' final codes for accurate financial impact, educate on inaccurate reconciliation and missed opportunities, and report vital CDI impacts to administration and each individual CDI. The Impact tab was started after Active was in Atlanta several years ago. At their national Active convention, CDIs asked to see their individual impact. They want to know what impact each query makes, and this wonderful Impact ROI, with the reports you can build out of SSR, does just that. (DESCRIPTION) A new slide with the Piedmont logo in the corner shows a photo of a tall, long building that curves slightly. (SPEECH) This is Piedmont Healthcare. This is our newest building in downtown Atlanta. The building was opened August 2020. It is a 408-bed facility, 16 stories in the heart of Atlanta's Historic District. There are 16 ORs, eight cath labs, four cardio-physiology labs. It also has an urban plaza with a Starbucks, a 300-car garage, and it's very high tech with our world-renowned cardiovascular surgeons. If you would like to, you can go on YouTube and take a tour of our wonderful facility in downtown Atlanta. (DESCRIPTION) New slide. Text, Piedmont. Real Change Lives Here. Piedmont has more than 31,000 employees caring for 3.4 million patients across 1,400 locations and serving communities that comprise 80% of Georgia's population. Piedmont has provided $1.4 billion in uncompensated care and community benefit programming to the communities we serve over the past 5 years. (SPEECH) I'll tell you a little bit about Piedmont. It is the largest health care provider in the state of Georgia. 
We currently have 22 hospitals, 55 Urgent Care centers, 25 Quick Care locations, 1,875 Piedmont Clinic physician practices, and more than 2,800 Piedmont Clinic members. And just this last year, in 2022, Piedmont was ranked 166th as one of the Best Large Employers in the US by Forbes. And we are very proud of that. (DESCRIPTION) A new slide shows photos and years built of different buildings: Atlanta 1905, (1957 location), Fayette 1997, Mountainside 2004, Newnan 2006, Henry 2012, Newton 2015, Athens Regional 2016, Rockdale 2017, Walton 2018, Columbus Midtown 2018, Columbus Northside 2018. (SPEECH) Now, this-- our study that we did began in 2020. And in 2020, these are the 11 facilities that Piedmont had. What is interesting about this is we had 11 facilities with seven integrations in six years. So we are consistently changing and growing and growing. (DESCRIPTION) New slide shows more building photos: Macon Coliseum 2012, Macon North 2021, Cartersville 2021, Eastside 2021, Eastside Loganville 2021, Augusta 2022, Augusta Summerville 2022. (SPEECH) And in 2021, we acquired five more facilities. These five facilities joined us in 3M in June of 2022 and are now part of our CDI program. And then, also in 2022, we integrated with Augusta and Augusta Summerville, which is the university hospital for the Bulldogs. And so-- and we will integrate with them in 3M and Epic next year, in November 2023. So we look forward to that. So Piedmont continues to grow. (DESCRIPTION) New slide titled Piedmont CDI shows a photo of the University of Georgia Bulldogs football team together on the field. (SPEECH) A little bit about Piedmont CDI. We are celebrating 10 years of being together. It started in 2012. We began with two facilities and four CDIs reviewing only Medicare. By 19-- 2019, in July, we grew to 11 facilities, a director, four managers, an educator, and more than 35 CDIs reviewing cases using the length of stay, working DRG priority that is in 3M 360. 
So all payers, no self-pay or charity, no OB, no Peds, and no NICU. In October of 2019, before COVID, we went 100% remote. And this is very, very important, because that asset helped us get through the rough times of COVID. Starting April of 2020, CDIs began reviewing using the long length of stay, working DRG priorities for all admissions, except OB, Peds, and NICU. (DESCRIPTION) New slide. Text, Piedmont Case Selection July 2019 to October 2020. CDIs Assigning cases by L O S and 3M Working D R G Priority. Filter by L O S choosing cases greater than or equal to 3 days. Next select cases by working D R G priority in the following order: 1, symptom Dx/D R G, 2, medical cases without CC/MCC, 3, surgical cases without CC/MCC, 4, surgical cases with CC without MCC, 5, sepsis D R G's review, 6, review D R G, consider alternate D R G, 7, questionable admits, 8, medical cases over GMLOS, 9, elective surgery over GMLOS, 10, low priority cases, minimal change impact, 11, optimal D R G, no need for review/re-review. A section of a chart shows Active Priority Factors and Working D R G information. (SPEECH) And if you are not familiar with what 3M working DRG priority looked like prior to the priority list, this is what the hierarchy of the working DRG priority looks like. (DESCRIPTION) New slide. Text, Welcome to the Game. A photo from a UGA football game with the opposing team about to snap the football. Text, March 19, 2020, First Piedmont COVID-19 admission. (SPEECH) Then, lo and behold, COVID hit. March 19, 2020, the first Piedmont COVID admission happened. And we had a stable team; at that point, we had no integration for the year. And this gave us an opportunity to use 3M technology to focus CDI reviews on the cases that most needed review. And Piedmont administration gave CDI the goal of reviewing 80% of admissions. And you cannot review 100% of admissions. 
So at that point, our director, Lori Dixon, who is very instrumental in leading our team using technology, brought us together and had Niki and me lead this; she will talk about Priority. We started using the Priority Worklist and the Impact ROI right in the middle of the five COVID waves. Our peaks were April 2020, July 2020, January 2021, August '21, and then January 2022. And right in the midst of that, we began using both at the same time. (DESCRIPTION) New slide. Text, Taking Piedmont CDI to the Next Level amidst COVID-19 Waves. October 2020, Priority and Impact ROI launched together. Two side by side screenshots, the left a table labeled North Priority Worklist with a long list of illegible items. The right screenshot shows a dashboard with tabs along the top and a lot of information with a popup box on top. (SPEECH) To get started, this is what our priority looks like-- looks like on the left. And on the right, this is what Impact ROI looks like. And Niki is going to tell you now about her beginnings with the Priority Worklist. (DESCRIPTION) New slide. Text, Priority Worklist Launch. Practice makes perfect! Prior to system launch: Set up the game plan. 3M defaults for prioritization points, established regional superusers, trialed different functionality, modified CDI workflow for the Priority Worklist, priority superuser team chose the layout of the worklist columns, added focus DRG priority for sepsis. Priority factor weights: New documents: OP Note, DC Summary, Queries. Financial class. (SPEECH) Thanks, Gail. So with the increasing challenge of staffing, we adopted prioritization as a tool to improve case review efficiency. With the goal of reviewing 80% of all adult inpatient admissions, except mother-baby, the 20% that we could not review should have the least likelihood of query opportunities. Prior to implementing prioritization, we attempted to do this manually, as Gail said earlier, with the list that she had shown. 
However, old workflows and habits are hard to break. And many staff would gravitate to cases that they preferred, such as reviewing by service line or length of stay, or, as some staff would call it, cherry-picking. This resulted in inefficiencies in the review process. Using the prioritization worklist as a tool and customizing it to our needs would help sequence the cases from the highest priority for query potential to the lowest without having to manually sort through the worklist. CDIs could just take the next case in line and review it. To implement the Priority Worklist, we decided to practice with a soft launch. If something did not produce the end result we had in mind, we could adjust and make changes and improvements that would help prepare for the overall system launch. We established regional superusers. Prioritization, as a tool within 3M, allowed for a lot of customization. We started with the 3M defaults and adjusted from there. I would recommend assessing what works best for your facilities. For example, sepsis has a high potential for denial, so we set up a focus DRG for sepsis cases to review for clinical validity and associated organ damage. We also found that adding priority factor weights for a few new document types was a great tool for CDI. (DESCRIPTION) New slide titled Priority Worklist Launch: Game Time. A photo of the UGA football team in a huddle on the field. Text, At October 2020 department meeting, priority worklist manager and priority superuser team presented priority worklists to staff and educated staff on prioritization and new features. Second workgroup came together to create a daily workflow job aid. Additional education using 3M tools and filtering. Region priority worklists were implemented following meeting. Priority worklist manager continues to validate worklist and educate staff. (SPEECH) Game time. Priority Worklist launch. 
So at the October 2020 department meeting, we presented a new priority worklist and educated staff on prioritization and new features. Regional priority worklists were implemented following the meeting. We had a second education session using 3M tools and filtering about a month after the initial launch. We also had another workgroup that came together to create a daily workflow job aid to assist the staff. We continue to work to validate the worklist while offering ongoing support and education to the staff. (DESCRIPTION) New slide titled Piedmont CDI Regional Priority Swim Lanes. A screenshot of the 3M CDI Dashboard for Gail Higle. It shows a color-coded key, purple for prioritized, green for ready, gray for scheduled for today, red for queries pending, blue for scheduled for later and orange for discharged and pending. Below is a horizontal bar chart labeled Visits. The bars are labeled with priority worklists for various locations. Each bar is divided into colors, each with a number on it to correspond to its length. (SPEECH) This is the CDI dashboard work queue divided into four regional swim lanes. There is one manager per region with 9 to 10 CDIs reviewing. This does not include Augusta, which is to be integrated into Piedmont, Epic, and 3M next year. (DESCRIPTION) New slide titled Piedmont Priority Worklists. A screenshot shows a chart titled North Priority Worklist with blurred out information. The columns are: Visit ID, Patient name, Score, Case status, last review date, assigned to, last access, available documents, pending queries, provider queries, follow-up, notification, priority and working D R G, Wt/GLOS/SOI/ROM. (SPEECH) So we sort the priority worklist by unreviewed cases and start reviewing from the top. We customized our worklist to have the priority score, then case status, last review date, assigned to, the last access to the chart. We customized which available documents to have. 
We included a column for the number of pending queries, the names of the queries we sent to the providers, any follow-up date if one was assigned, notifications to coding, any additional priority that we may assign individually to CDI, and then, also, on the end, the auto-suggested working DRG. These columns were customized by our superuser teams, and that was very helpful for them. (DESCRIPTION) New slide titled New Features. Text, At a glance, see how many queries need followup. A screenshot shows a closeup of the Pending Queries column from the worklist. Above the chart it says 6 pending queries. The choice CDI Query Status Pending has been chosen from the Priority Factor dropdown menu. Text, Who last accessed account. A screenshot of the Last Access column shows different names in each row. Text, Case status and last review date. A screenshot of the Case Status and Last Review Date columns. The Case Statuses shown are Discharge and Concurrent. (SPEECH) These are some of the new features we shared with the staff. The first column shows the ability to sort by priority factor to quickly see how many queries are pending, which is also helpful if we have staff out and are covering for each other. In the middle, you can see who last accessed the account. This is useful in determining if coding has had a chance to review the case. And in the last column on the end, there is a case status that shows new, concurrent, or discharged, and also the last review date. It shows in green if it was reviewed today. (DESCRIPTION) New slide titled Priority Scoring showing two screengrabs from the Home tab of the dashboard, the left one labeled Ability to dismiss factor. It shows the Priority Score, 310, and the Visit State: New. A blue box appears around Possible Sepsis, 30. Below the written statistics is a line chart labeled Priority Score Progression showing the Findings and Priority Score from 3 PM to 3 AM. The screengrab on the right side is labeled Action Items. 
It shows the same information, but the Action Items section at the top reads, 1 Open, 1 Total. A blue box appears around the heading and the text, Open. Actual Result Codes not found in Final Codeset, Immediate action is required. (SPEECH) Priority scoring. To dive a little bit deeper, you can see additional priority scoring tools within the encounter. On the left is the ability to dismiss a resolved factor. When CDI reviews for possible sepsis and then decides whether or not to query, they can dismiss the factor, which will remove the priority factor from the scoring of that case. The priority score pertains to just the new information, documentation, or status change, to give the most up to date priority score to assist with which case to review next. The open action item on the right side of the page shows the missing query response from the final code set. This creates an alert to the CDI for the missing code and helps prevent re-billing. Before, this was a manual process of comparing codes. But now, CDI is notified of the missing query response in the final coding. This improves efficiency with the time spent in the chart and helps reduce errors. (DESCRIPTION) New slide titled Workflow Changes. Text, Assign and complete initial case review one at a time by priority score. No longer assigning 10 to 12 cases when you sign on, only assign the one you are working on. No longer required to assign followups for all cases. Only assign followups as needed, for specific reasons and not for routine scheduled followup. Worklist will move the cases with the highest priority to the top of your list. (SPEECH) Workflow changes. The two biggest workflow changes that we had were assigning cases one at a time by priority score and not scheduling routine follow-ups for all patients. The process now is to assign and complete initial cases one at a time by priority score, no longer assigning 10 to 12 cases when you sign on in the morning. 
Cases are continuously updated in real time. So the cases at the top of the list are the ones most likely to need clarification. In the past, when new documentation came in, it would go unnoticed until the CDI would manually review. Using the prioritization as a tool gets CDI to the case most likely needing a review without having to manually review each case for changes. And this reduces unnecessary or non-value-added reviews. We no longer assign follow-ups on all cases. The CDI only assigns a follow-up for potential clarification, and uses prioritization as a tool to alert the CDI when a new review is needed. With continuous updating of prioritization scoring, we don't need to spend the time to follow up on all cases looking for changes. We now use a combination of technology and CDI expertise to improve reviewing efficiency. (DESCRIPTION) New slide titled Priority Ongoing Improvement. Text, Obstacles. Questioning change: Increased autonomy in setting reviews using clinical expertise and 3M tools to enhance review efficiency. Only creating followups as needed. Choosing cases by priority score and not picking preferred service line, DRG, short stays. Regional differences: surgery hubs, sepsis cases. CDIs working different hours caused differing case loads. CDIs working from 4 AM to 10 PM, live across the US in different time zones. Of note: Asked staff to escalate cases that appear not to have a correct score. Validated all cases as having correct scoring by priority settings. (SPEECH) Some of the obstacles we encountered were questioning change and needing to reinforce the new workflow. There were regional differences-- we'd have surgery hubs, sepsis cases. There was ongoing improvement to the priority. CDIs can now check between 30 and 40 total cases per workday, with some staff taking upwards of 16 to 17 initial cases. This is a success we really owe to the staff and the workgroups that worked on this. They were instrumental in getting priority going. 
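The factor-based scoring described above might be sketched like this. This is a minimal illustration only; the factor names, weights, and the sum-of-weights scoring method are assumptions for the sketch, not 3M's actual implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Case:
    account: str
    factors: dict = field(default_factory=dict)   # factor name -> weight (illustrative)
    dismissed: set = field(default_factory=set)   # factors the CDI has dismissed

    @property
    def priority_score(self) -> int:
        # Dismissing a resolved factor removes its weight from the score
        return sum(w for name, w in self.factors.items()
                   if name not in self.dismissed)

cases = [
    Case("A", {"Possible Sepsis": 30, "New Documentation": 20}),
    Case("B", {"Pending Query": 50}),
]
cases[0].dismissed.add("Possible Sepsis")  # reviewed, decided not to query

# The worklist continuously sorts the highest-priority cases to the top
worklist = sorted(cases, key=lambda c: c.priority_score, reverse=True)
```

In this toy version, dismissing the sepsis factor drops case A behind case B, which mirrors the described behavior: the score reflects only what still needs attention, so the next case to review rises to the top.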
It's important to encourage the staff not to revert to old workflows and assign follow-ups for all cases, or they will be buried in a sea of red, overdue follow-ups. We found that with routine scheduling of follow-ups, many would not get reviewed before discharge, and the act of scheduling follow-ups was inefficient, resulting in many clicks to set up and then resolve the follow-ups upon reconciliation. The workgroup also noticed that when all follow-ups were scheduled, a potential query opportunity that the CDI recognized would be one amongst many follow-ups and would likely be missed. So when the CDIs wonder, will I miss something, they are now using their clinical expertise to assign follow-ups only for a particular reason, allowing them to get to the cases that really need to be reviewed. This gives the CDI increased autonomy in setting reviews. Of note, we also did ask staff, to help create that buy-in and support, to escalate any cases that did not appear to have the correct score. And we were able to validate all the cases as having the correct score by the priority settings that we chose. So that is what I have for prioritization. One last thing with that is I would encourage you to play with that prioritization. It's a little bit of a tinkering tool. It's customizable to whatever comes up. If you want to do reviews for sepsis-- we're doing a new travel project, so we were able to set up worklists based off of that-- it is an incredible tool, something that you can tinker with. That is what I have. So back to you, Gail. Thank you, Niki. After Niki's part of the meeting-- this was one department meeting that we launched this in together as a department. And at that point, we were all remote. So this was one large department meeting online through Webex at that time. We now use Teams. 
And after she presented her priority worklist that the team put together, right after she spoke, I spoke about the Impact ROI launch. (DESCRIPTION) New slide titled Impact ROI Launch. Text, Impact ROI manager presented at October 2020 department meeting; Impact ROI education highlighted. Query scenarios for Missing Diagnosis, New Principal Diagnosis, Clinical Validation and POA. Impact ROI reconciliation steps, including open action item for uncoded query responses. CDI scorecards to display individual CDIs information: PDX, MCC, CC, Procedure, SOI and ROM Impacts and accurate financial impact. Impact ROI Tab implemented after department meeting. Additional Benefits and Support: Regional manager validation worklists save managers time validating impactful cases concurrently. CDIs case reconciliation is concurrent before the bill drops not at the end of the month. Impact ROI manager provides ongoing education at department meetings and after feature updates. Ability to submit 3M enhancements to improve reporting of impacts to administration and CDI scorecards. Managers continue to troubleshoot cases with errors, including missing and incorrect impacts and escalate to 3M the unresolved issues. (SPEECH) And of note, right after this meeting, it was turned on in 3M. So when everybody went back to work after this meeting, there they had the worklist, and they also had their cases going to Impact. And to tell you about our launch: when we started, the way I did it was I went to 3M. In 2020, they had updates 20.7 and 20.8, and I used those updates to create a PowerPoint to educate the CDIs at that meeting. And then, after the meeting, each CDI got a copy of that PowerPoint to use so when they were reconciling their cases, they understood how to do that step-by-step, with some different query scenarios on how to reconcile the cases using the Impact tab. We did missing diagnoses, new principal diagnoses, clinical validation, and POA. 
And those examples are all on the original 20.7 and 20.8 updates that 3M did. Then, I will also show you-- we went through the Impact ROI reconciliation steps, including the open action item and the un-coded query responses. And then, I will also show you how we created a CDI scorecard that gave each CDI all of their credits for their queries for PDX, MCC, CC, procedures, SOI, and ROM, and the accurate financial impact. And I'll talk about that in a minute. The Impact ROI tab was implemented right after the meeting, right as we started the worklist also. The big benefit of doing the Impact ROI tab is that our regional managers can validate the worklists and the cases as they are completed instead of at the end of every month. So this saved us a lot of time. For regional managers, it was in real time. And because of that, CDI cases were concurrently reconciled with the coder before the bill dropped. And this also is saving Piedmont a lot of time getting those bills out the door, instead of at the end of the month and then putting up the red flags when bills are held. The Impact ROI manager, which was myself-- I provide ongoing education at department meetings. Any time there's a feature update, the quarterly 3M feature updates, we do further education. If it is big education, it will be part of our department meeting. If it is just small education, it will go out in an email the morning after the update. Lists of the updates are sent directly to the CDIs so they can see the cosmetic changes or whatever changes that 3M has in that feature update. The ability to submit 3M enhancements-- this was huge. As we started building more reports to send to administration or add to scorecards, there were more fields that we wanted to offer. Geometric length of stay was one of those. And that enhancement was put in. And within a couple of months, the ability to put geometric length of stay on our administrative KPI was given. 
And so, that is also very helpful. Managers, this is the big part. Managers continue to troubleshoot cases, even today. We were doing one that was a POA query that a CDI couldn't get an impact for. So the managers, we worked together, we looked at that. And the biggest ones are the missing baselines and the incorrect impacts. And we escalate any problems that we find to 3M as we find issues, and they get right back to us. It has been a wonderful collaboration. (DESCRIPTION) New slide. Text, Steps for Successful Impact Tab Reconciliation. Before checking CDI Final Review Complete. The left side shows a screenshot from the Impact ROI tab on the dashboard. An orange arrow points to the word Codesets in the upper right corner. The tab is labeled Final Cumulative Impact. There are statistics across the top such as Estimated financial impact, weight, SOI and ROM. An orange arrow, labeled Coder's Final Codes, points to the Baseline row under D R G Type. The right side of the screen shows the Codefinder page. An orange arrow, labeled CDIS Codes, points to the two codes and their info listed under the Medicare D R G and MDC information, 177, Respiratory Infections and Inflammations with MCC, and 004, Diseases and Disorders of the Respiratory System. (SPEECH) Here is what the steps of successful impact tab reconciliation look like. You have the coder's codes on the left, and the CDI codes are on the right. And the CDI can see their DRG and then the coder's DRG. And of course, the coder's DRG should be the baseline. And then, here are all the codes that the coder coded, in order. And then, the query links are the little RD query templates. And this next part shows the steps of how you would do a reconciliation of a case. (DESCRIPTION) New slide. Text, #1 Queries are linked to Coder's Codes. A screenshot from the Impact ROI tab showing a chart titled Final Diagnosis Codes. 
It shows each code, its description, and POA, Affect, MCC, CC, SOI, ROM, HCC, HAC, PPC, Elix and Baseline. Under the Query column, Sepsis with Criteria PHC for one item and CHF PHC for another item is circled. (SPEECH) Number one, the CDI looks at their query. First of all, is it linked? Is it linked to the coder's code? The slide before was an RD query. Here, we have sepsis. It's linked to the sepsis. The CHF is linked to the CHF. And you can see the baseline diagnosis and the final diagnosis-- DRG, sorry, DRG. And the impact, and all of the impacts going across the top. (DESCRIPTION) New slide. Text, #2, Home Tab: Query Green Check Mark, Except Clinical Validity Queries. A screenshot shows the Home tab, including headings for Action Items, Priority Score, Activity, Findings, Followups and Queries. The status Finalized is circled along with the green checkmark and Pulmonary Embolism PHC next to it. (SPEECH) Now, the second thing that they look for is in the Home tab. Is there a link? If they did not find their query placed in the impact tab, they go and look for a link in the Home tab, and see if that green check mark is missing. Except for clinical validity queries-- clinical validity queries will not have a green check mark. (DESCRIPTION) New slide titled #3, Home Tab: No open action items. It shows another screenshot from the Home tab. Under the Findings heading, a chart of codes is shown for a patient. The Elix, Baseline and Query columns are circled for one of the codes. A checkmark appears in Elix and the baseline and query are blank. In the Queries section, the status reads Finalized and the query reads malnutrition PHC. Text, CDI needs to check for correct query response diagnosis code. If not coded, send Coder notification to code the query response. (SPEECH) And third, if it still isn't linked, if they haven't figured out why it isn't linked, they go to the Home tab and there may be an open action item. 
This occurs if the coder did not code the exact codes from the query box. This one was malnutrition, and immediate action is required: the malnutrition code was not added by the coder. And the CDI would add a notification to the coder and let them know that their code was not coded. (DESCRIPTION) New slide titled Impact ROI Ongoing Improvement. A photo of the Bulldogs preparing to snap the football. Text, Obstacles: Accurate financial impact: collaboration with EPIC and 3M team to correctly interface coder's EPIC estimated reimbursements to 3M 360 Encompass. CDIs continuing to use final DRG comparison tab and not Impact Tab for reconciliation. CDIs missing Final Cumulative Header for agreed queries. CDIs logic for clinical validity "Was the diagnosis documented and truly supported?" cases should have zero-dollar impact. Incorrect negative and positive financial impact mostly due to incorrect Baseline Diagnosis codes. (SPEECH) Ongoing improvement. Our very first hurdle-- we had our 3M team, our wonderful Wendy and Barry, and Orlando. I worked with Barry. And we worked on our finances: the estimated financial impact in Epic was not coming over to 3M in the impact tab for all of our then 11 facilities. At that point, we knew we had a problem. They weren't accurate with what was in Epic. So Barry worked with the Epic team, and they came up with a crosswalk table so that all of our finances now match, facility per facility, DRG per DRG, between Epic and 3M. And we check that every once in a while just to double check. But if you have any of those problems, work with your EMR team and see if you can figure that out. Our CDIs also continued to use the final DRG comparison tab instead of opening up the impact tab for reconciliation. That took some time, to teach everybody to click that impact tab first. Next, the CDIs were missing their final cumulative header on the agreed queries-- that was another learning. 
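The linkage checks in steps one through three boil down to verifying that each query-response code appears in the coder's final code set, and raising an open action item when one doesn't. A rough sketch, assuming a hypothetical helper function and example ICD-10 codes, not the actual 360 Encompass logic:

```python
def reconcile(query_response_codes, final_codes):
    """Return an open action item for each query-response code the coder
    did not include in the final code set (hypothetical helper)."""
    final = set(final_codes)
    return [f"Query response code {c} not found in final codeset"
            for c in query_response_codes if c not in final]

# e.g. the malnutrition code from an agreed query was not coded,
# so the CDI gets an action item and sends the coder a notification
items = reconcile(["E43"], ["A41.9", "J96.01"])
```

The point of the design described above is that this comparison now runs automatically, so the CDI is alerted instead of manually comparing the two code lists.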
If you got an agreed query, what was in the header? Did you get your CC? Did you get your MCC? Was there a check mark for clinical validity? And then, a hurdle we still have sometimes is the CDI logic for clinical validity. Was the diagnosis documented and truly supported? If it was documented, those cases should have a $0 impact. And then finally, incorrect negative and positive financial impact, mostly due to incorrect baseline diagnosis codes. And we still struggle with some of those. But our team is really doing a great job. It's been two years and we are getting so much better at this. CDIs will escalate anything that does not make sense to the manager. And then, the managers work one-on-one with the CDIs to resolve those. We've even talked as a management team about not even validating those anymore, because the team has been doing such an excellent job of correctly validating queries and reconciling them on their own. So we have come a long way in two years. (DESCRIPTION) New slide titled Key Secrets to Winning: Team Support. Text, #1, Successful Remote Team Support. By the time COVID-19 high admissions and continuous coding changes hit, Piedmont remote CDI had established: fairly reliable remote technology with Webex, (now Teams), CDI scorecards with productivity expectations, weekly leadership team meetings, monthly department meetings and regional manager-led team meetings, monthly manager/CDI scorecard meetings with annual performance evaluations, addition of CDI educator, facility physician advisors with bi-weekly query reports, remote nationwide hiring and orientation, administrative support, CDI coding collaboration and buddy system, supportive Piedmont 3M Team, CDI leadership approach of education, guidance and trust. (SPEECH) So what are some secrets that we have? One, I would say, is team support. And I think all of our managers would agree. 
And I would believe that our CDI team would be proud to say that we really do have excellent support in the way we work together as a group. Going 100% remote in 2019 was very key, because by the time COVID hit, we already had Webex; now we use Teams meetings. And fairly reliable technology-- we still have our technology issues, but we have good processes and we work consistently to get those fixed. We already had CDI scorecards with productivity expectations. We had weekly leadership meetings, monthly department meetings. And then, each of the four managers in the regions has their own team meeting once a month. Then, we have monthly manager calls one-on-one with our CDIs. And then also, we do annual performance evaluations. During this time, an educator was also added, which was very key to continue the support of education across the board, not only for CDIs-- at that time we did some physician education, too. The facility physician advisors were added. At that point, we had a physician advisor at almost every facility who gets a bi-weekly outstanding query report. We continue that to this day. Remote nationwide hiring and orientation-- that was a learning curve, but we had already gone through that. And administrative support. Administration has been very supportive of us over the years despite our setbacks of COVID. And then, CDI-coding collaboration. We have a buddy system. Each CDI is connected with a buddy in their region that also codes for that region, and that is very key. And then, with that, our coders-- we also have our second level review team, C2E, that reviews cases where CDI and coding might not see eye-to-eye. And we use our notification systems very closely as a team. And then, we have a wonderful, supportive Piedmont 3M team-- I can't thank them enough. And then, our CDI leadership approach is education, guidance and trust. And trust-- ongoing education, but trust is so, so important. 
Our CDIs have-- some have up to 40 years of nursing experience, and they have a lot of knowledge they can give. We just have to trust each other as a team to grow and learn. There are going to be mistakes, but we work together. And we get through the mistakes, and we continue to grow. (DESCRIPTION) New slide titled Key Secrets to Winning: Coaching. Text, #2, Team Building, Goal-Driven Leadership. Director with vision for change, willing to take risk in new technology and provides direction to managers, bi-monthly leadership meetings. Weekly one-on-one calls with managers and educators for development, support, projects and goals. Managers and educator meet weekly to update processes through job aids, analyze tough reconciliation, escalate potential errors to 3M, submit needed enhancements to 3M, daily ongoing support of CDI team members through priority and reconciliation education, monthly scorecard calls to build relationships, review progress and goals, and inspire growth. Key factor: to trust CDIs with education provided to work autonomously. CDIs daily work to follow job aid process and to meet and exceed CDI scorecard goals, promptly escalate priority or impact reconciliation problems, consistently collaborate with coders through notifications to complete reconciliation for timely billing. Department meets monthly led by director, supported by managers, educator, and CDI, coding, priority and impact education, updates and team building. (SPEECH) And that was our team support. Our next winning secret is our coaching. And like I said in the beginning, we have a wonderful director, Lori, who seized these kinds of opportunities 3M offers and instituted these changes. And to this day, she's still supporting us to add further technology and grow as a team, whether it be education or technology. And then, like I said, she has a vision. And we do all of these calls that I was talking about. And then, our managers and educator, we still meet weekly. 
And we go through job aids and talk reconciliation; we escalate any potential errors to 3M right away. We keep track on a spreadsheet of what our errors are and the success of resolving these things. And then, we escalate any needed enhancements. They might come out of this team or from our CDIs themselves. Several of them have had ideas for enhancements. And then, again, our key factor is to trust the CDIs, with education provided, to work autonomously. And that's something we all try to build into each other, and Lori also does in us as managers. And then, CDIs' daily workflow: the job aid process, to meet and exceed scorecard goals. And I think Niki brought that up, that we have a goal of 11 to 12 initials a day. And some of our CDIs have actually made their goal 14 or 15 initials a day. And we often see up on our dashboard CDIs doing 45, 50 case reviews a day. And through the priority worklist and the impact tab, and all of our winning coaching and education, we're able to do this. And then, the department meets monthly, led by Lori, our director. And the managers, educator, CDIs, and coding, we all work together to keep learning and moving on. (DESCRIPTION) New slide titled CDI Scorecard showing a chart titled CDI Query Impacts FY22. It shows information for each month of the year, as well as yearly totals, for number of queries, PDX impact, MCCs impact, CC impact, procedures impact, SOI impact, ROM Impact, number of Clinical validation queries, and financial impact, in dollars. A graphic in the corner of the slide shows the UGA bulldog standing in front of a college football national championship trophy. (SPEECH) And this is what one of our scorecards looks like. And you can see we built the impacts in. And when I was out at CEF this summer, I understand some departments have even added the HCCs on here and different other impacts that are available. You can see we have the number of queries, all of the PDXs, MCCs, CCs, procedures. 
And then, the financial impact is at the end. And this is one of our team members who has been a CDI for 16 years. So we are very fortunate to have a very rich history of CDI in our team. And there is our winning little bulldog. (DESCRIPTION) New slide titled CDI KPI Dashboard. A chart showing PHC CDI KPI, All Admissions, (No OB, Peds, NICU), for July of '21 through June '22. The information included is Total Admissions, total admissions reviewed, percent admissions reviewed, total reviews: initial, continued stays, retrospective, CDI average chart reviews per day, query rate, query agreement rate, provider query response time/days, financial impact, increased GMLOS days by queries, CMI balance scorecard. (SPEECH) And then, this is our KPI dashboard that goes out to the administration at each of the facilities. And as you can see, one of the enhancements that we had added on here-- one of the key goals for Piedmont, as of course in many hospitals, is to decrease the length of stay. And our case management teams are always looking for opportunities to increase our geometric length of stay on cases. So with our queries, we can tell our case management groups these are the number of days that our queries have added. And then, we have our case mix index from our balance scorecard, and our query rates, and our agreement rates. And one of the questions somebody asked me at CEF was, "Well, how many queries do you not get answered?" Well, very, very, very few. Our department sends an average of about 2,000 queries a month. And we have very few non-answered. It's not acceptable to not answer a query. So even with our new integrations, we do go through a learning curve. But with the support of the administration and of the physician leaders, we have been able to have a very, very low number of no responses, so-- (DESCRIPTION) New slide. Text, Real Change for the Win! Estimated Financial Impact, up 15%. A photo of a UGA football player kissing the trophy. 
(SPEECH) and this is where it all comes down to: what did the priority worklist bring? (DESCRIPTION) Text, CDI impacts by working D R G Priority FY20, July 2019 to May 2020. Admissions reviewed, 71,406, query rate, 26%, agreement rate, 97%, physician response, 1.4 days. (SPEECH) And this is over that two year period from October-- looking before, at 2019, and then looking forward, to 2020 through 2021-- we were able to increase our estimated impact by 15%. And this is the period that we looked at: we reviewed 71,000 admissions. Our query rates remained stable around 25% to 26%. And our physician response days are about 1.4 days. (DESCRIPTION) CDI impacts by priority and impact ROI reports FY22, July 2021 through May 2022. (SPEECH) And then, we looked at July 2021 to May 2022, and we reviewed 73,000 admissions. And remember, this is the same number of CDIs reviewing. Again, query rate, 25%, agreement rate stayed stable at 97%, physician response, 1.5 days. (DESCRIPTION) Principal diagnosis impacts 3,134, MCCs added 6,637, C C's added 4,505, procedures added 193, GMLOS days increased, about 4,900. (SPEECH) And now, we are able to say how many principal diagnoses we've impacted, how many MCCs, CCs, procedures we added. Geometric length of stay: we increased our geometric length of stay by almost 5,000 days. And our estimated impact, about 15%. And I just reran some numbers yesterday and it continues to climb. In my region, we are up over 20% from last year alone. So we are doing excellent with the impact. And (DESCRIPTION) A new slide shows photos of W. Edwards Deming and Nelson Mandela next to quotes. (SPEECH) if I was to say anything, I would say that for our leadership under Lori-- the four of us, plus Pam, our educator-- one big thing that we try to instill in each other and in our team is education. As Nelson Mandela said, "Education is the most powerful weapon which you can use to change the world." And one of my favorite people is Edwards Deming. 
And he said that "85% of the reasons for failure are deficiencies in the systems and processes rather than the employee." It's usually not the employee; it's our processes. And "The role of management is to change the process rather than badgering individuals." And (DESCRIPTION) To do better. (SPEECH) what a great tool we have here in 3M, with the impact ROI and with the priority worklist, to bring new processes that improve and take away some of the frustrations and lack of ability to grow. And we have grown. And to sum up the whole thing: Piedmont, real change lives here. (DESCRIPTION) New slide with the Piedmont logo. Text, Real change lives here. (SPEECH) So we are ready for some questions. Awesome! Thank you so much. I love how you kind of tied that in at the end about processes and people. And there are a lot of things within our everyday lives where that really rings true about where some of the frustration comes from. And I love that Piedmont is really looking at that head on. So I applaud that. So before we get started, we have a few great questions that have come in. I just want to remind everyone that the certificate of attendance is available in the Resources section for download. Also, on the bottom menu bar, you might also see a little kind of cap-- a graduation cap. You can also access the certificate of attendance there. And then, also, I did see Linda just put in real quick that question about the slides-- those are also in the Resources section that you can download as well. So please make sure you do. We will be offering this webinar on-demand on our website in the next few weeks. Once we get all of that wrapped up, if you do want to listen in again, you certainly can on our website. So (DESCRIPTION) New slide. Text, Q&A. (SPEECH) let's get into the questions that we have. Angela asked, "Do you encounter issues with accounts becoming a priority but now having a longer LOS? 
Did that cause any concerns with the CDI specialists?" I can take that one-- this is Niki. Absolutely, that's a great question. We encouraged our staff to escalate cases if they had any concerns. We did get cases escalated because they were unreviewed with a long length of stay. Before prioritization, we used to review with length of stay as a sorting feature and then go by the auto-suggested DRGs. So with this process, it was a big change. We're not able to review all cases. We just can't review 100% of cases. So there's going to be about 20% or more of cases that we can't get to; we're not going to be able to review those cases. And we want the ones that we're not going to review to have the lowest probability of a query opportunity. So we would take a look and validate, and see if those cases with a longer length of stay should have been higher up on the worklist. And so, we'd look and validate. And we would see that those cases typically would have a low priority to review. And when you're looking at cases: is that a medical or surgical case, [INAUDIBLE] or MCC, or is that an optimized DRG with little query opportunity? And what we found is those cases really had very little potential for query opportunities. And those are the ones that we're willing not to have CDI review. We just cannot review all of the cases. So that was a great question. Thank you. Great! Next question is, "Who trained the coders on Impact ROI?" The coders don't use it, but the CDIs do. And like I said, in that October meeting where we launched this and turned it on afterwards, I created a PowerPoint using the feature updates from the 20.7 and 20.8 3M documents. I created a PowerPoint, presented that, then each CDI was emailed that PowerPoint presentation. And they used that education. So that's how they were all educated on how to do those reconciliation steps. Like I said, we continue to do education, ongoing. 
But it took probably, I would say, a good three months for people to really get it down. All right, the next question we have is, "What is the time frame for CDS to perform the reconciliation processes?" They have their discharged and ready-for-final list. And it is expected that each CDI has 10 or fewer cases on that. And the managers pretty much-- we go out and watch to make sure that people are doing their ready-for-final. They reconcile-- I saw another question out there that's also on this-- they reconcile 100% of the cases they review. Of course, on the impact tab, their header only comes up on the agreed and documented queries. But they do reconcile 100% of their cases with coding. And they use the notification process. So at the end of every month, we expect that the previous month's cases will be reconciled, except for a couple queries that may be pending out there, by the 8th of the following month. And for example, I went out on the systems list and looked this morning. And there were only two September cases left. One of them had its query answered just yesterday. So as of right now, there is only one query left that has not been answered from September. So-- but it is expected that they keep them ongoing, because this keeps bills going out the door without delays. And they do a great job at it. Can you talk about some of the issues or challenges for the CDIs when starting the impact ROI? I guess-- well, first of all, just knowing, does my query make an impact? OK, if it did, was it positive, was it negative? And if they misunderstood, they emailed us: was the baseline missing or was it present-- did you start with a UTI and go to a sepsis, was the UTI code in the baseline? That kind of thing. And that was probably the first hurdle. And then, once the CDIs started to realize, hey, this did make an impact-- I'm getting my tab, I'm getting a positive or negative impact. 
Then, it just grew from there, and very successfully. And yes, we still troubleshoot problems. What percentage of your population is billed on APR? Also, do your CDIs code the record concurrently? We totally bill by DRG. So we do not use the APR DRG. And what was the second part? Do they code concurrently? No, our coders do not code concurrently. Our CDIs do-- they code the cases by using the priority list. But our coders code after discharge. I think they're pretty much at one to two days post-discharge when the coders are coding. Do you-- [CLEARS THROAT] Excuse me, do you use standardized query templates? No, we don't. Our management team, starting somewhere around 2019 or even before, maybe 2018, started developing our own query templates. We now have 84 templates that we have written. And every fall, as it is right now, fall again, with every update we go through those and quickly look at them and make sure that they meet all of the new coding standards. And we will rework words or whatever we need. But like I said, we have written our own. Another question-- we probably have time for about two more. Let's go with: can you elaborate on how the CDI queries increased GMLOS? Yes. With the baseline DRG, when you look at that in the header, it will tell you what the geometric length of stay is for that DRG. And when you move from pneumonia to sepsis, it'll show you that sepsis has a longer geometric length of stay. And in SSR, there is a report that you can run on the impact. There is a field for baseline-to-final geometric length of stay change. And that can be pulled into your report. And that is how that is reported. All right, are your-- are Piedmont-- my goodness! Are Piedmont CDIs only Georgia-based? No, we have CDIs all over the nation. We have two in California. Hello, John. Hello, Abby. 
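The GMLOS answer above can be made concrete: the reported field is simply the final DRG's geometric length of stay minus the baseline DRG's. A small sketch, where the DRG-to-GMLOS values are illustrative placeholders, not actual table values:

```python
# Hypothetical DRG -> GMLOS lookup; real values come from the MS-DRG tables.
GMLOS = {"193": 3.2, "871": 4.9}

def gmlos_increase(baseline_drg: str, final_drg: str) -> float:
    # e.g. pneumonia -> sepsis moves the case to a DRG with a longer GMLOS,
    # and the query is credited with the difference in expected days
    return round(GMLOS[final_drg] - GMLOS[baseline_drg], 1)
```

Summing that difference across all queried cases is, per the description, how a figure like "almost 5,000 GMLOS days added" would roll up for the administrative report.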
And we have-- Diana's up in Iowa, we've got Aaron and Carrie up in the Midwest-- or North, we have CDIs in the Midwest. And our director, Lori, is in Florida, where it's nice and warm. So we are all over the nation. And then, the rest of us are mostly in Georgia, but we are all over. Got to love the ability to have remote workers. I don't know what we would do if we didn't have that ability. So I applaud that. Let's go ahead with one last question. Do you have any issue with retrospective queries needing to be sent because accounts weren't re-reviewed? Yes, and we keep an eye on those. We also use the SSR reports for that-- we look at concurrent versus retrospective queries. And as managers, every month or so, we will run a report to track and trend that. Because if people are not doing their follow-ups, they might end up with a lot of retrospective queries. And we try to keep a handle on that so that those bills aren't delayed and that we aren't impeding Piedmont's ability to balance our budget. So yes, we do keep an eye on that. And coding works with us on that very, very closely as well. (DESCRIPTION) New slide. Text, That's a wrap! (SPEECH) Great! Well, thank you to everyone who submitted a question that we weren't able to get to. I do want to thank our speakers today. One of the questions asked was about submitting for CEUs. You are able to take that certificate of attendance and submit it to one of the accredited associations to get those. So if you do have any questions about that, you have the ability to email us within that menu bar as well. But you should be able to download that certificate and submit it. Again, great presentation from Piedmont Healthcare, we truly appreciate it. We will have this recording available in the next few weeks on our website if you do want to go ahead and listen in again. If you could, please complete that survey. We always love to hear how we did.
And also, be on the lookout for the final webinar. Gosh, I can't even-- I can't even believe that we're already talking about the end of the year. Our final CDI Innovation webinar will be in December. So be on the lookout for that so you can register. So again, thank you, Piedmont, and we look forward to hosting you all again. Thank you so much. (DESCRIPTION) New slide. Text, Thank you. (SPEECH) Thank you. Thank you.
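The GMLOS impact described in the Q&A above comes down to a simple difference: the geometric mean length of stay of the final DRG minus that of the baseline DRG. A minimal sketch of that arithmetic, assuming hypothetical DRG labels and GMLOS values (the numbers below are illustrative placeholders, not actual CMS figures, and `gmlos_change` is not a 3M report field):

```python
# Illustrative GMLOS lookup. The DRG labels and values below are
# hypothetical placeholders, not actual CMS figures.
GMLOS = {
    "193 Simple pneumonia w MCC": 4.2,
    "871 Septicemia w MCC": 5.0,
}

def gmlos_change(baseline_drg, final_drg):
    """Return the GMLOS change when a query moves a case from the
    baseline DRG to the final DRG (positive = longer expected stay)."""
    return round(GMLOS[final_drg] - GMLOS[baseline_drg], 2)

# Moving from pneumonia to sepsis shows a longer geometric length of stay.
delta = gmlos_change("193 Simple pneumonia w MCC", "871 Septicemia w MCC")
print(delta)
```

In the report the speaker describes, this per-case delta is what gets pulled into the baseline-to-final GMLOS change field and aggregated.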

      Webinar title slide

      Piedmont Healthcare: Taking the CDI game to the next level with priority and impact ROI

      • October 2022
      • Starting in October 2020, Piedmont’s clinical documentation integrity (CDI) team implemented 3M™ 360 Encompass™ System’s prioritization and impact ROI features. This allowed the organization to review the most impactful cases, improve documentation and simplify the reconciliation process. By July 2022, Piedmont’s CDI team began a second phase to investigate additional opportunities to improve priority worklists and refine impact ROI.
      • Learn how Piedmont was able to capture an impressive 15 percent increase in impact. In addition, hear from the team that successfully enabled CDI leadership to report increased comprehensive CDI impacts to administration with individual CDI scorecards.
• (DESCRIPTION) Videos of speakers appear on the left. Slides are to the right. The slides show a webinar template. Text, New year, new webinar platform! A great company is showing what interesting applications a fantastic product can bring for motivated users. 3M CDI Innovation Webinar Series. (SPEECH) Good afternoon and welcome to our August-- I almost said quality. This is the CDI Innovation Series. I'm getting my months mixed up. Welcome, everybody, and thank you for joining. The summer is starting to wind down, and we have kids going back to school. So hopefully everything is good in your world. And we appreciate you joining us today. Just a couple of things before we get started. (DESCRIPTION) Text, 3M Science. Applied to Life. 3M CDI Innovation Webinar Series. Applying compliant guidelines and (SPEECH) We have a great panel today. Just to make sure you know all of the functionalities of the On24 webinar platform: (DESCRIPTION) Text, On24 Webinar Platform for a better user experience! Use Google Chrome and close out of VPN/multiple tabs. Check speaker settings and refresh if you are having audio issues. Ability to move engagement sections. Ask questions! Certificate of Attendance available to download for live webinar sessions. Engagement tools and CC available. Check the resources sections. Complete the survey. The information presented herein contains the views of the presenters and does not imply a formal endorsement for consultation engagement on the part of 3M. Participants are cautioned that information contained in this presentation is not a substitute for informed judgement. The participant and/or participant's organization are solely responsible for compliance and reimbursement decisions, including those that may arise in whole or in part from participant's use of or reliance upon information contained in the presentation. 3M and the presenters disclaim all responsibility for any use made of such information.
The content of this webinar has been produced by 3M. 3M and its authorized third parties will use your personal information according to 3M's privacy policy (see Legal link). This meeting may be recorded. If you do not consent to being recorded, please exit the meeting when the recording begins. (SPEECH) This is a web-based platform. So if you are having any audio issues or issues with any of the engagement tools, make sure you're off of VPN and close out of multiple tabs just to help with the bandwidth on your end. And you can always do a quick refresh of the browser. Chrome is the recommended browser. So if you're in Edge or Explorer, switch on over to Chrome and that could help as well. There are some engagement tools. Please ask questions in the Q&A box. We'll get to as many as we can at the end. There is a resources section. We do provide a certificate of attendance that you can submit for CEUs. We also have the presentation handout there, as well as some other resources about our solutions. If you do need closed captioning, you can turn that on in the media player. Again, that resources section has multiple resources that are available to you. And then at the end, we always appreciate you completing the survey just to let us know how we did. (DESCRIPTION) Text, Meet our panelists. Below are photos of the five presenters. Text, Chris Berg, R H I A, CCS, CCDS-O, CHC. Colleen Deighan, R H I A, CCS, CCDS-O. Audrey Howard, R H I A. Sue Bailey, M.Ed., R H I A, CPHQ. Bobbie Starkey, R H I T, CCS-P, A H I M A. (SPEECH) Another section that we have in that dashboard is our Meet the Speakers section. So if you are interested in learning more about our speakers today, their bios are in that section. So feel free to peruse that as well. So let's go ahead and get things started. I'm going to pass today's session over to Colleen Deighan, who's going to go over the agenda and just what to expect today.
Again, please feel free to ask questions in the Q&A section of your dashboard, and we'll get to as many as we can at the end. Colleen? (DESCRIPTION) Text, Agenda. Provide an overview of Hierarchical Condition Categories (HCCs). Introduce the Risk Adjustment Data Validation (RAD V) program. Identify the two official coding sources used by RAD V. ICD-10-CM Official Guidelines for Coding and Reporting. American Hospital Association (AHA) Coding Clinic. Discuss the MEAT criteria. Listen to discussion among 3M panel participants. Participate in Q/A with our listeners. (SPEECH) Yeah. Thanks, Lisa. And hello to all the listeners. We appreciate you dialing in today to listen to this topic, and hopefully dialogue with us on the topic, something we talk about internally a lot. So our agenda today: we're going to provide just a brief overview of HCCs, known as Hierarchical Condition Categories; introduce, maybe for some of you, the Risk Adjustment Data Validation program, or what's called the RADV audit program; identify the two official sources that are used by RADV; talk about and discuss the MEAT criteria; and then we're going to have a discussion internally, amongst those of us on the panel. And then again, I hope to participate with all of you in some question and answer towards the third section of our session today. (DESCRIPTION) Text, HCC Model overview. There are two columns below. The left column is labeled, text, CMS HCCs. Under this column, text, Developed by the Centers for Medicare and Medicaid Services (CMS). For risk adjustment of the Medicare Advantage Program. CMS also developed a CMS RX HCC model for risk adjustment of the Medicare Part D population. Based on aged population (65 and over). Current year data predictive of future year risk. The right column is labeled, text, HHS HCCs. Under this column, text, Developed by the Department of Health and Human Services (HHS). For risk adjustment within the commercial payer population.
HHS-HCCs predict the sum of medical and drug spending. Includes all ages. Current year data used to predict current year risk. (SPEECH) So let's start with a brief overview before we begin really talking about this. We want to talk about the guidelines and the criteria, but we wanted also to provide just a brief overview, sort of, to set the table. So I wanted to point out that there are two models for HCCs, or hierarchical condition categories, and briefly point out the differences. The original model is the CMS model. It was developed by CMS for use with the Medicare Advantage program. They do have what we call a Part D, or prescription drug, model that can be added as part of this program. I use the comparison to DRGs: the CMS model is based on an aged population, similar to the DRG model. And this is a prospective payment model. So it's using the current year's data to predict next year or future year risk around disease burden and cost of care. In comparison, the Department of Health and Human Services, known as HHS, developed a model around the Affordable Care Act, actually. And we see it used with commercial payer populations, including state Medicaid and Medicaid HMO programs. That program has a combined medical and drug spending model. And the two big differences between this model and the CMS model: it does include all ages-- again, I use the APR DRG comparison for this, so patients of all ages are included in the HHS model-- and, knowing that this population of patients typically moves, changes jobs, and goes on and off different health plans, the current year's data is used to predict current year risk. So we have a prospective, aged model and an all-age, current-year model. (DESCRIPTION) Text, Risk Adjustment Data Validation program. HHS-RAD V - risk adjustment data validation. Department of Health and Human Services (HHS) operates.
CMS, on behalf of HHS, performs risk adjustment data validation. Purpose, Ensure the integrity of risk adjustment program. Validate the accuracy of data submitted. Two types of RAD V audits are conducted. Annual national level audits - conducted to estimate the national Medicare Advantage (MA) improper payment rate. Contract-level RAD V audits - conducted to identify and recover improper payments. RAD V recognizes two resources for validating coded data. ICD-10-CM Official Guidelines for Coding and Reporting. Coding Clinic for ICD-10-CM published by the American Hospital Association. (SPEECH) If any of you have been involved in diagnosis capture for HCCs, I think, no doubt, you've heard about the Risk Adjustment Data Validation audit, known also as the RADV program. CMS, the Centers for Medicare and Medicaid Services, conducts various RADV audits to ensure the accuracy and integrity of risk adjustment data that's been submitted by the Medicare Advantage payment programs. RADV is a process to verify that diagnoses submitted for payment by Medicare Advantage organizations are supported by the medical record documentation. So we do have two types of RADV audits that I just wanted to touch on briefly. There are the annual national level audits and the contract level audits. The national audits are meant to estimate the national Medicare Advantage improper payment rate, while the contract level audits are conducted to identify and recover improper payments. So as part of RADV, one of the really important components that we wanted to talk about today is that CMS, as part of RADV, states that the medical records must meet, at a minimum, five requirements to avoid what they call discrepant findings. One of those five requirements is listed here at the bottom of this page.
So according to RADV, when an audit is performed, the claims have to be coded according to the official conventions and instructions provided within the ICD-10-CM code book, along with the Official Guidelines for Coding and Reporting and guidance provided by the American Hospital Association, known as the AHA Coding Clinic for ICD-10-CM, which is published quarterly by the American Hospital Association. And Bobbie, I'm going to turn it over to you so you can discuss these guidelines in a little more detail. Thank you so much, Colleen. (DESCRIPTION) Text, Accurate coding and reporting of outpatient services. ICD-10-CM Official Guidelines for Coding and Reporting. Section 1 Conventions, general coding guidelines and chapter specific guidelines. Section IV Diagnostic Coding and Reporting Guidelines for Outpatient Services. J. Code all documented conditions that coexist. Code all documented conditions that coexist at the time of the encounter/visit that require or affect patient care, treatment or management. Do not code conditions that were previously treated and no longer exist. However, history codes (categories Z80-Z87) may be used as secondary codes if the historical condition or family history has an impact on current care or influences treatment. A H A Coding Clinic Central Office. Serves as the official coding clearinghouse on the proper use of the ICD-10-CM, ICD-10-P C S, and HCPCS Level II classification systems. Provides coding advice regarding the proper application of these systems using the Alphabetic Index, Tabular List, Official Coding Guidelines, and A H A Coding Clinic advice. (SPEECH) So as Colleen mentioned, there are two resources that RADV utilizes for coding validation. The first one is the ICD-10-CM Official Guidelines for Coding and Reporting. And the sections of these guidelines that focus on outpatient coding would be Section I, which is the conventions.
So the symbols and how to use the index would be listed under conventions. It includes the general coding guidelines-- for example, coding to the greatest level of specificity, coding manifestations or complication codes. And then your chapter-specific guidelines, which would be like guidelines specific to COVID coding, or guidelines specific to pregnancy coding in chapter 15. And then the other section that refers to outpatient is Section IV. And in this section, they talk about outpatient services, and they state that this includes hospital outpatient services as well as physician outpatient services. So for physicians, their office visits would be considered outpatient services, and for the hospitals, it could be same-day surgeries, emergency department. It could be your ancillary tests, wound clinic, outpatient hemodialysis centers. So all of these fit under the outpatient services umbrella. They don't differentiate between HCCs and non-HCCs; all outpatient services fall under these guidelines. And the one guideline that I want to specifically address is the letter J guideline under Section IV, and it's very important that you understand this guideline. The guideline states to code all documented conditions that coexist at the time of the encounter or visit that require or affect patient care, treatment or management. So not just coexist at the time, but they also have to require or affect patient care, treatment or management. It goes on to say that personal history and family history conditions can be reported if they have an impact on current care or influence current treatment. So it's kind of vague to me. It tells me they have to affect or require care, treatment or management on this visit, but they don't specify what that looks like. So that leads us to the second resource that RADV utilizes, and that would be the AHA Coding Clinic advice. (DESCRIPTION) Text, Coding Clinic advice.
Coding chronic conditions for outpatient encounters. ICD-10-CM/P C S Coding Clinic, Third Quarter ICD-10 2019 Pages: 5-6 Effective with discharges: October 1, 2019. Question, A patient presents as an outpatient for hernia repair surgery. The provider notes "Crohn's disease," in the past medical history and indicates the patient is taking an immune modulating drug for the condition. Per the Official Guidelines for Coding and Reporting, Section IV.I: Chronic diseases treated on an ongoing basis may be coded and reported as many times as the patient receives treatment and care for the condition(s). Additionally, Section IV.J states: Code all documented conditions that coexist at the time of the encounter/visit and require or affect patient care, treatment or management. Although the patient did not receive treatment during the current encounter, is it appropriate to report the Crohn's disease as an additional diagnosis? Answer, In the outpatient setting, chronic diseases treated on an ongoing basis may be coded and reported as many times as the patient receives treatment and care for the condition(s). Based on the documentation submitted, the provider has specifically stated that the patient is receiving treatment for the Crohn's disease. Although the patient is not receiving treatment during the current encounter, the patient is receiving interval treatment; therefore, Crohn's disease should be coded and reported. The ongoing treatment does not need to occur during this encounter. The fact that the patient is undergoing treatment for Crohn's disease affects patient care and management. (SPEECH) And there are a couple of Coding Clinic advices that I want to specifically go over with you today. The first one is in regards to coding chronic conditions for outpatient encounters. And this advice was published third quarter 2019. It's in regards to a patient that comes in for a same-day surgery visit for a hernia repair.
And the question is, the patient has Crohn's disease listed in their past medical history. The physician indicates that the patient is taking an immune modulating drug for the Crohn's. Can this Crohn's disease be reported for this same-day surgery visit for hernia repair? And the answer that Coding Clinic gives is, in the outpatient setting, chronic diseases treated on an ongoing basis may be coded and reported as many times as the patient receives treatment and care for the condition or conditions. That is actually letter I in Section IV of the outpatient coding guidelines, which we didn't discuss. So they go on to say, based on the documentation submitted, the provider has specifically stated the patient is receiving treatment for the Crohn's disease, and therefore it impacts care and management for this same-day surgery visit. So it would be reportable even though the patient's not getting that treatment on this visit. Unfortunately, we aren't privy to the documentation that Coding Clinic reviewed to come to this determination. So it's unclear how that physician specifically stated the patient was undergoing treatment for the Crohn's disease. So now we're going to move on to another Coding Clinic that is in regards to reporting additional diagnoses for outpatient. It came out third quarter, 2020. And in this one, the question was, a patient presents to the emergency department, another outpatient setting. And they were there for a symptom. But in the past medical history, the provider also documented some behavioral health conditions, and the patient had antipsychotic drugs listed in their medication list. So they're asking the question, can these conditions be reported because the patient's currently on antipsychotic medications? In the previous Coding Clinic, the physician specifically stated that the patient was taking the immune modulating drugs and they impacted care.
In this case, the provider has listed antipsychotic medications in the medication list and the chronic condition in the past medical history. Coding Clinic's response is no. Those mental disorders were not treated during this encounter, nor was there any documentation that these conditions affected patient care, management or treatment. The provider has to indicate that these conditions or any other conditions-- so this doesn't just apply to behavioral health conditions-- affected the management of the patient during the current visit. Otherwise, those disorders would not be coded and reported. So this response is telling me-- it doesn't tell me what support does look like, but it tells me it doesn't look like a condition only listed in a past medical history and a medication for that condition listed in the medication list. (DESCRIPTION) Coding Clinic advice. Reporting additional diagnoses in outpatient setting- clarification. ICD-10-CM/P C S Coding Clinic, Third Quarter ICD-10 2021 Pages: 32-33 Effective with discharges: September 20, 2021. Reporting Additional Diagnoses in Outpatient Setting. Question, Coding Clinic, Third Quarter 2020, page 33, advised against assigning a code for the patient's mental health conditions since the provider did not document that the conditions affected patient care and management. It was also noted the patient was currently on antipsychotic medications for their chronic mental health conditions. This advice seems contradictory to Coding Clinic, Third Quarter 2019, pages 5-6, where a code for Crohn's disease, a chronic autoimmune disorder, was assigned for a patient on interval immune modulating drugs to treat the condition. Coding Clinic established that ongoing treatment did not need to occur during the encounter; the fact that the patient was undergoing treatment affected patient care and management.
It seems as though ongoing treatment with antipsychotic medications constitutes affecting patient care and management. Would the advice in Coding Clinic, Third Quarter 2020, change for conditions that have potential to exacerbate during care, such as autism or schizophrenia? Please provide clarification on the coding of chronic conditions in the outpatient setting. Answer, Coding professionals should not assign codes based solely on diagnoses noted in the history, problem list and/or a medication list. It is the provider's responsibility to document that the chronic condition affected care and management of the patient for that encounter. In the case published in Coding Clinic, Third Quarter 2019, pages 5-6, the provider specifically stated that the patient was receiving treatment for Crohn's disease. When the provider documents that a patient's condition or treatment thereof affects care and management for the current encounter, the condition should be reported even if treatment did not occur during the encounter. In the case published in Coding Clinic, Third Quarter 2020, page 33, codes were not assigned for the mental health conditions, since there was no provider documentation that the mental health conditions or their treatment affected patient care and management for the current encounter. If the medical record is unclear or ambiguous regarding which condition(s) affected patient care and/or management of the patient, query the provider for clarification. (SPEECH) So then, I'm still unclear on what it looks like, and I'm guessing I'm not the only one because third quarter 2021, Coding Clinic puts out a clarification for the two prior coding clinics. Someone wrote in and said, can you clarify what you talked about in these previous two coding clinics? What constitutes support for these conditions? 
And the answer Coding Clinic gave was, coding professionals should not assign codes based solely on diagnoses noted in the history, problem list, and/or a medication list. It's the provider's responsibility to document that the chronic condition affected care and management of the patient for that encounter. So again, they're really stressing what does not support coding those chronic conditions. (DESCRIPTION) Text, Coding Clinic advice. Hierarchical Condition Category (HCC) Coding- clarification. ICD-10-CM/P C S Coding Clinic, Second Quarter ICD-10 2022 Page: 30. Effective with discharges: June 3, 2022. Hierarchical Condition Category (HCC) Coding. Question, Is the advice published in Coding Clinic Third Quarter 2021, pp. 32-33, related to reporting additional diagnoses in the outpatient setting only if the chronic condition affected care and management of the patient for that encounter applicable to coding for hierarchical condition categories (HCC) for risk adjustment? Answer, The Coding Clinic advice that additional diagnoses in the outpatient setting must affect care and management of the patient was related to the coding for a single specific encounter in time. Coding for risk adjustment, such as for HCCs, involves the collection of known current chronic conditions over the course of a year. While a patient's chronic condition would be captured for HCC coding from other visits, encounters, or hospitalizations when the chronic condition affected care or needed management. (SPEECH) And then in June of this year, we had two new Coding Clinics come out. The first one is actually a clarification for HCC coding, but it refers back to those same Coding Clinics that we just looked at. And the question asks about the previous Coding Clinic clarification regarding coding secondary diagnoses when they're documented in the problem list or the medical history and the medication list.
Does that apply also to encounters for hierarchical condition category, or risk adjustment, coding? And the answer that Coding Clinic gives is that the Coding Clinic advice that additional diagnoses in the outpatient setting must affect care and management of the patient is related to the coding for a single specific encounter in time. They say coding for risk adjustment, such as HCCs, is over the course of a year. So if that patient has a visit every month for that year-- so 12 visits-- they can take an HCC for the RAF reporting calculation from any one of those visits. But the visit that it comes from needs to be supported. On that visit, the chronic condition needed to affect care, or needed to impact management or treatment. (DESCRIPTION) Text, Coding Clinic advice. Reporting additional diagnoses in outpatient setting- clarification. ICD-10-CM/P C S Coding Clinic, Second Quarter ICD-10 2022 Page: 30 Effective with discharges: June 3, 2022. Reporting Additional Diagnoses on Outpatient Setting. Question, We disagree with advice published in Coding Clinic Third Quarter 2020, page 33, regarding not coding a mental disorder during an emergency department (E D) visit for an unrelated condition because the mental disorder was not treated during the current encounter, nor was there any documentation that the condition affected patient care or management. We are requesting clarification of this advice as it appears to conflict with existing outpatient guidelines. Answer, The advice published in Third Quarter 2020 does not conflict with Official Guidelines for Coding and Reporting (Section IV.J) as it utilized the same verbiage as the guideline that states "Code all documented conditions that coexist at the time of the encounter/visit and require or affect patient care, treatment or management." (SPEECH) And then again in June 2022, another Coding Clinic question.
Someone wrote in that they disagree with the advice regarding the ED visit with the mental disorders, and they say that they felt that by not coding those they were going against coding guidelines. And the answer that Coding Clinic gives refers that question back to the Official Guidelines for Coding and Reporting, Section IV.J, as it utilizes the same verbiage as the coding guideline-- code all documented conditions that coexist at the time of the encounter or visit and require or affect patient care, treatment or management. So we're seeing this consistency from Coding Clinic, even though they haven't specifically told us what the documentation needs to look like to support this. (DESCRIPTION) Text, Documentation examples. Assessment/Plan: Patient was seen today for annual exam. Diagnoses and all orders for this visit: 1. Essential hypertension. 2. Hyperlipidemia, mixed. 3. CKD. 4. Acquired hypothyroidism, unspecified. Current Outpatient Medications Ordered in Epic. aspirin 81 MG EC tablet TAKE 1 TABLET BY MOUTH ONCE DAILY 90 tablet 2. atorvastatin (LIPITOR) 40 MG tablet TAKE 1 TABLET (40 MG) BY MOUTH EVERY DAY. cholecalciferol (CHOLECALCIFEROL) 1000 unit tablet Take by mouth. gabapentin (NEURONTIN) 100 MG capsule Take 100 mg by mouth 3 (three) times daily. hydrochlorothiazide (HYDRODIURIL) 25 MG tablet TAKE 1 TABLET (25 MG) BY MOUTH EVERY DAY. levothyroxine (SYNTHROID) 112 M C G tablet Take 1 tablet (112 mcg total) by mouth once daily Take on an empty stomach with a glass of water at least 30-60 minutes before breakfast. lisinopril (ZESTRIL) 40 MG tablet Take 1 tablet (40 mg total) by mouth once daily. Assessment/Plan: Patient was seen today for annual exam. Diagnoses and all orders for this visit: 1. Essential hypertension - controlled. Continue lisinopril. Comprehensive Metabolic Panel (CMP). 2. Hyperlipidemia, mixed. Lipid Panel W/Reflex Direct Low Density Lipoprotein (LDL) Cholesterol. 3. CKD - following with Nephrology every 6 months. 4.
Acquired hypothyroidism, unspecified - stable, normal lab a month ago. Hair loss has stopped. levothyroxine (SYNTHROID, LEVOTHROID) 112 M C G tablet; Take on an empty stomach with a glass of water at least 30-60 minutes before breakfast. (SPEECH) So I'm going to look at two documentation examples with you. These are both clinic office visit notes. And I'm going to preface this because these are examples, and so I put what I thought was pertinent on the screen. We don't have the whole note. So on the left-hand side, this patient was seen for an annual exam. We're going to pretend that all the doctor put in the rest of his note was making sure that the patient is up to date on all her annual screenings. So did she have her mammogram? Did she have her colonoscopy? Is she at risk for osteoporosis? Those types of questions and [INAUDIBLE] were what was on the rest of the note. But in his assessment and plan, this is what he documented, and in the med list, this is what is documented. So we have to pretend that's it. On the right-hand side, I want you to know that the support for those chronic conditions doesn't have to be in the assessment and plan. It can be anywhere in that record or in that visit note. So for example, an ED note. A lot of times the ED providers will put one diagnosis in their final assessment, and then in their-- sometimes they call it an ED course note, sometimes they call it an ED rationale note, whatever they call it-- they summarize other conditions in that note, and they document, we're going to give her IV fluids for dehydration, or things like that. So there's additional information that can be pulled from anywhere in that visit note; that specific visit documentation can be used. It doesn't have to be in the assessment and plan. These are just what we're using for examples. So on the left-hand side, you can see that there are four diagnoses listed, chronic conditions.
And the medication list is there, and there are medications for those conditions. So Coding Clinic has made it clear in their advice that it's not acceptable to code from past medical history or a med list. Now, is it acceptable to code from an assessment and plan and the med list when that's all that is documented, and there's nothing else documented here? They say that the physician has to specifically document how that condition impacted the current stay and treatment, and I don't see that on the left hand side. On the right hand side, this is excellent documentation. We have the same four chronic conditions. Essential hypertension: he notes that it's controlled, he wants the patient to continue their antihypertensive medication, and he's ordering a CMP for the hypertension. The mixed hyperlipidemia: he's ordering a lipid panel. And if you look above essential hypertension, it says diagnoses and all orders for this visit. So those tests were ordered on this visit. He's assessing that condition. Number three, CKD. He notes, following with nephrology every six months. He's not treating the CKD, he's treating the hypertension. But he's monitoring the CKD, stating that someone is following the CKD, because in following the hypertension he needs to know that the CKD is also being addressed. So that would support the CKD. Number four, acquired hypothyroidism. He notes that it's stable, the patient had a normal lab a month ago, the patient's hair loss has stopped, and he links the medication to the hypothyroidism. So to me, this supports what Coding Clinic advice and coding guidelines are saying: the physician has to specifically show how those chronic conditions impacted the visit. Now, we still don't have any official guidance. But I'm going to turn this over to Chris now, and she's going to talk about a tool that might help us determine what constitutes being assessed or requiring treatment during the specific encounter. Chris. Well.
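For readers who like a concrete restatement, Bobbie's left-versus-right comparison boils down to a simple rule: a condition listed in the assessment and plan counts only if the note links at least one piece of visit-specific support to it. The toy Python sketch below illustrates that idea; all names are hypothetical, and this is not a depiction of any real CDI software.

```python
# Toy illustration only (hypothetical names, not any real CDI product):
# a chronic condition listed in the assessment/plan is supportable only
# if the visit note links at least one piece of support to it.

def has_visit_support(condition: str, note_links: dict) -> bool:
    """True if the note documents at least one way this condition
    was addressed during this specific visit."""
    return bool(note_links.get(condition))

# Left-hand example: CKD is listed, but nothing in the note is linked to it.
left_note = {"CKD": []}

# Right-hand example: CKD is linked to documented monitoring.
right_note = {"CKD": ["following with Nephrology every 6 months"]}

print(has_visit_support("CKD", left_note))   # False
print(has_visit_support("CKD", right_note))  # True
```

The point of the sketch is only that the listing alone carries no weight; the linked evidence does.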
Thank you, Bobbie. So moving on to the MEAT criteria. (DESCRIPTION) Text, MEAT criteria. Where does MEAT criteria come into the picture? Monitor - signs, symptoms, disease progression, disease regression. Evaluate - test results, medication effectiveness, response to treatment. Assess - ordering tests, discussion, review records, counseling. Treat - medication, therapies, other modalities. (SPEECH) And MEAT is an acronym that I use when I am educating both physicians and coders in CDI. Each letter in MEAT stands for monitor, evaluate, assess, and treat, and there are examples of each within this slide. So I use this as a reference tool for educating physicians on what they need to document to support a diagnosis in the note, but I also use it as an education tool for coders, CDI staff, and auditors too, for what to look for within the documentation to support the application of official ICD-10-CM coding guidelines. So going back to what RADV looks at, the auditors in RADV do use the ICD-10-CM official coding guidelines for coding and reporting HCC diagnoses. So at this time, I will give it over to Sue, I believe. OK. Thank you, Chris. (DESCRIPTION) Text, Question, Can you talk about the hospital inpatient guidelines for coding and reporting and how they differ from outpatient guidelines? (SPEECH) So we're going to spend the next few minutes talking with our panel here about what we feel are some pertinent questions and discussion topics. This first one I'm going to direct toward Audrey. Audrey, can you talk about hospital inpatient coding guidelines and how those differ from the outpatient guidelines that we've talked about today? Yes. Thank you very much. The main part of it-- the guidelines for coding secondary diagnoses, those chronic conditions, or anything that's not the principal diagnosis-- is very similar. It's basically the same thinking regarding the inpatient and the outpatient.
In other words, we need to make sure that it's a reportable diagnosis. And a reportable diagnosis means that there was a clinical evaluation, therapeutic treatment, or a diagnostic procedure performed for that condition, or that it increased nursing care or increased the length of stay. And it's not that all five of those reporting criteria are necessary. It's just that at least one of them was done for that condition, and that would make it a reportable secondary diagnosis. So where it differs a little bit from the outpatient setting-- where Bobbie was expanding on it, saying it's not just that a medication is prescribed for that condition-- in the inpatient setting, it's: did they receive the medication? There's obviously a longer length of stay, more time in the inpatient setting, and they will be getting that medication, maybe for the congestive heart failure, for the atrial fib. And that supports the utilization of resources toward that condition, so we can pick it up then as a secondary diagnosis. In the inpatient setting, we know that we can pick up possible or probable diagnoses if they're documented as uncertain at the time of discharge. In the outpatient setting, however, you can only code to the highest degree of certainty. So if they're saying chest pain, possible-- why am I blanking on possible causes of chest pain? But if they're saying possible heart attack, then that could be coded in the inpatient setting as long as it's documented at the time of discharge. However, in the outpatient setting, you would just be coding the chest pain as the final diagnosis. Also, one other big difference between the inpatient and outpatient setting is that in the outpatient setting you can code from a diagnostic study-- the impression, the findings from a diagnostic study. However, in the inpatient setting, you need to get that information confirmed by a hands-on physician, if you will.
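Audrey's "at least one of five" rule for inpatient secondary diagnoses can also be restated as a short sketch. This is a toy illustration with hypothetical names, not an official CMS or 3M tool; it simply encodes the two requirements she describes: provider documentation plus at least one of the five reporting criteria.

```python
# Toy restatement of the inpatient secondary-diagnosis rule described
# above: provider documentation AND at least ONE of the five reporting
# criteria. Hypothetical names only; not an official CMS or 3M tool.

REPORTING_CRITERIA = {
    "clinical evaluation",
    "therapeutic treatment",
    "diagnostic procedure",
    "increased nursing care",
    "increased length of stay",
}

def is_reportable_secondary(documented_by_provider: bool,
                            criteria_met: set) -> bool:
    """A secondary diagnosis needs provider documentation and at
    least one of the five reporting criteria; all five are not required."""
    return documented_by_provider and bool(criteria_met & REPORTING_CRITERIA)

# CHF treated with medication during the stay: reportable.
print(is_reportable_secondary(True, {"therapeutic treatment"}))  # True

# Condition mentioned but nothing done for it: not reportable as-is;
# as discussed later, that is a query opportunity rather than a code.
print(is_reportable_secondary(True, set()))                      # False
```

Note that when either requirement fails, the answer in practice is a query to the provider, not a silent decision to drop the code.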
So the diagnosis from the pathology report needs to be in the body of the record-- the discharge summary. What were the results from the pathology? What were the results from any of the radiological tests that were done? The physician in the inpatient setting needs to confirm that information in the body of the record. Bobbie, anything else to add on this question? No. I think you touched on almost everything. The only thing I might add is just to note that for outpatient visits, sometimes symptoms are going to be appropriate instead of an actual condition, because the patient may not have a confirmed diagnosis at the time they're discharged from an outpatient visit. But other than that, yes, you covered it. Yes. OK. Thank you, ladies. Moving on. Our next question. This one-- I'm sorry. (DESCRIPTION) Text, Question, How would you contrast and compare ICD-10-CM Official Coding Guidelines and MEAT criteria? (SPEECH) I think it didn't advance. So this one, I'm going to direct toward Colleen. Colleen, how would you contrast and compare the ICD-10-CM official coding guidelines and MEAT criteria? Thanks, Sue. So Chris touched on this a little bit just a few slides ago. And what I would add for starters is that there's CMS and there's the National Center for Health Statistics, two agencies within the federal government's Department of Health and Human Services. And they are the ones that provide the ICD-10 guidelines as a set of rules developed to accompany and complement the conventions and instructions that we talked about that are within ICD-10 itself. I also want to point out that adherence to these guidelines is required under HIPAA, and that these guidelines have been adopted under HIPAA for all health care settings. So as coding professionals and documentation integrity professionals, we live and breathe these guidelines-- the four sections, plus the POA guidelines that are added if you're on the inpatient side.
And reference them often as the official rules or regulations around coding and reporting. So the MEAT criteria to me, then-- as Chris mentioned, it's really: how do I apply these guidelines? Bobbie gave you some of the examples where they talk about how a condition has to be monitored or it has to impact the treatment, but they don't always give you the specifics. So the MEAT criteria is a good tool for applying those guidelines-- as Chris mentioned, when you're educating providers on what needs to be in the documentation to support a condition. We always tell them, whatever you're thinking, write it down. And they don't always do a great job of that. So it's really teaching them: whatever you're thinking about the patient, write it down. And that helps to support HCC capture compliantly, as Chris mentioned as well. Education to coding staff and to CDI staff around the HCC methodology, as well as when they are validating conditions that have been reported. Our job as coding and documentation professionals, and the CMS requirement, is that we send out an accurate claim. So it's really important that we follow the guidelines, and then utilize the MEAT criteria to apply those guidelines. Chris, anything you'd add to that? The only thing that I would add is that we just want to reiterate the importance of using the official coding guidelines, and then the references available to us, such as Coding Clinic, that guide us for reporting diagnoses in the outpatient setting. Thank you. OK. Thank you. (DESCRIPTION) Text, Question, When is it appropriate to query the provider for clarification of an HCC diagnosis? (SPEECH) So Chris, we'll pick up with you again. When is it appropriate to query the provider for clarification of an HCC diagnosis? OK. So in the outpatient setting, and specific to office clinic visits, queries can be sent prospectively, concurrently, or retrospectively.
So many outpatient CDI professionals are reviewing the record before the patient comes into the office for their appointment. And they may send a prospective query-- or what is sometimes called a nudge; you may have heard it as a nudge or a notification-- to the provider for clarification. So a prospective query, or any query in the outpatient setting, or a nudge, should be compliant as well as non-leading, just like the queries used on the inpatient side. A great reference for queries is the 2019 position paper from ACDIS and AHIMA, titled "Guidelines for Achieving a Compliant Query Practice." This is a great tool that outpatient CDI programs and inpatient CDI programs can use when developing query policies and procedures. So moving on to the question at hand, there are several reasons to query a provider regarding an HCC diagnosis, and I have just a few here. When there are clinical indicators of an HCC diagnosis, but no documentation of the condition in the notes. An example of this is when you're reviewing the note, you see that a BMI over 40 is documented. You see that within the physical exam the provider notes the patient is obese. And then he says in the note that he has discussed dietary lifestyle changes, increasing exercise. But he does not document morbid obesity. So there is an opportunity there to send a query for clarification. Another reason would be when clinical evidence is found for a higher specificity of an HCC diagnosis, avoiding that unspecified ICD-10 code. An example of this would be our diabetic patients that are coming into the office. They may have labs done before coming in, and that lab value, the blood glucose, is over 125. So do we have an opportunity there, if it was not documented by the physician, for diabetes with hyperglycemia? The third reason that I have is when there's a question of cause and effect-- that relationship between two conditions that are documented within the note.
We need to get clarification on that. And then, when there's treatment documented in the notes, but no documentation of the diagnosis associated with that treatment. So if a physician is adjusting medications for a specific diagnosis, but doesn't link the treatment to the diagnosis, we have an opportunity there. And again, these are just a few reasons for wanting to query or nudge a provider in the outpatient office setting. And I wanted to pull in Audrey for her take on queries. Do you have anything to add from an inpatient perspective? Thank you, Chris. It's really similar from the inpatient perspective. It's really when you are needing to get that clarification-- when there is an indication that a diagnosis is present or that a condition was being treated, but there's no actual diagnosis documented by the licensed provider. So we need to get that clarification. Or even on the other side of it, sometimes the licensed provider will document a diagnosis, but you are not seeing the evidence that it was clinically significant for the current encounter. So you may need to get that clarification just to say, you know, you've documented this diagnosis, but please provide additional documentation to confirm the diagnosis, as evidenced by. So that you can get that clarification there. In the inpatient setting, there are two requirements for a diagnosis to be added as a secondary diagnosis. One, that it's documented by the licensed provider. There are some exceptions to that, but the majority of your diagnoses need to be documented by the licensed provider. And two, that you can verify it meets the reporting criteria-- meeting one of those five criteria: evaluated, monitored, treated, increased nursing care, or increased length of stay. If either one of those two requirements is not met, then that's your good query opportunity-- to not just say, oh, I'm not going to code it.
But to say, hey, I need to go back and get either the diagnosis documented, or get that evidence that the condition was clinically significant or clinically valid. You want to get that documentation. Sue, are you on mute? We might have lost you. Sorry, I was. There you are. Yes. Here. I'm here. This last question is for Bobbie. Bobbie, for outpatient services, are chronic conditions able to be coded from an anesthesiology pre-procedural assessment? (DESCRIPTION) Text, Question, For outpatient services are chronic conditions able to be coded from an Anesthesiology pre-procedural assessment? (SPEECH) OK. I'm not going to say yes and I'm not going to say no. What I will say is, we know that on an anesthesiology note, some of those chronic conditions definitely would impact the treatment by the anesthesiologist. So patients with COPD or sleep apnea-- it could impact what type of airway they use. It could impact their ASA score. It could impact the type or amount of anesthesia provided, the time that the patient needs to be monitored both during and after anesthesia. So we know that those things impact what the anesthesiologist is going to do. But we need to make sure that our documentation in that pre-procedural assessment supports that. So when you look at your pre-procedure assessment from anesthesiology, is he documenting a list of past medical history conditions and a list of medications? Or is he specifically documenting that those conditions are impacting care-- what is he doing differently for the COPD? So when you look at your documentation, if it mirrors coding guidelines, if you apply MEAT to it, and you say, yes, I do have support here, then I would say report the chronic condition. If it's not mirroring coding guidelines, or if you're saying, well, it's listed in a history list, and the patient's on medication-- they're on home oxygen.
That does not meet the guidance that we've been given and the coding guidelines for reporting those additional chronic conditions. So in that case, I would be hesitant to report those. Sue, anything to add? Or since you've been asking us all the questions, do you have anything you want to talk about as far as outpatient coding guidelines and reporting for HCC diagnoses? Well, thank you for your response about anesthesiology. I've been thinking about the fact that we're on the eve of the 40th anniversary of the inception of inpatient prospective payment and the associated DRGs. And that's such a long time. Because of that, over the last almost 40 years, we've seen this continual expansion of the official coding guidelines every October 1st-- and during the pandemic, in between as well. And we've seen volumes of Coding Clinic advice, as well as the advent of clinical documentation improvement, to help us all achieve accurate coding and reporting, which in turn supports accurate DRG assignment and reimbursement. We haven't seen as much growth with regard to coding guidelines and advice in the outpatient setting over the same time period, until much more recently. And I think this is organically occurring now. We're starting to see more guidelines and more conversation about proper and accurate reporting of outpatient diagnoses because, one, we're seeing more care delivered in the outpatient setting. Just think about the fact that joint replacements can now be performed in the outpatient setting. Who would have ever thought that? And two, we're starting to see more reimbursement being tied to diagnosis coding in the outpatient setting-- for example, the HCCs we've been talking about today. And compared to the inpatient setting, payment may have been more linked to or driven by procedure in the outpatient setting. So with the linkage to diagnoses, I think we're going to continue to see growth in coding guidelines.
And advice on how to support outpatient coding, as well as the best approaches to achieve documentation that supports these initiatives, as the industry figures out outpatient clinical documentation improvement. So I think the growth in this area will be analogous to what we've seen in the past 40 years with our inpatient prospective payment system. People may wonder why we are talking about this today, and it's because of these situations-- in our consulting practice, the kinds of questions that we get asked from our customers, and the types of reviews, audits, and outpatient CDI programs they ask us to help with, because they are really attuned and focused now on that outpatient setting. So we need this infrastructure of guidelines and advice to help us achieve all of the benefits we have with inpatient prospective payment. So that's what I was thinking about, Bobbie. Colleen or anyone else, do you have any more thoughts that you'd like to bring up today before we move into Q&A with the audience? Yeah. So this is Colleen, just building on what you were saying. So 40 years ago, the inpatient prospective payment system. And then we saw ambulatory payment classifications 22 years ago, in the year 2000, which was also a prospective payment method for hospital outpatient services. The physician side has really kind of remained fee for service. So we see again that shift from inpatient to outpatient, and the shift from fee for service into these prospective payment models. So when you think about HCCs, it is a prospective payment model. And when you think about population health-- the continuum of care, wherever that patient might receive care in a given year. Our role, the provider's role, documentation integrity's role-- our roles are all around always telling the accurate story of the patient's encounter. So when we think about population health, we used to just take care of sick people. Right? And now we're being asked to manage populations of patients.
So this disease burden-- these chronic diseases these patients are expected to have throughout the course of the year, or until their death-- is a big focus around getting this right and getting proper payment prospectively to care for, and to predict the cost of care for, this population. And remember, disease progression is expected in this population. So in the example of COPD, or diabetes, or CKD, those diseases will progress. And even something like COPD or heart failure, something along those lines-- at what point does chronic respiratory failure come into play as this patient's disease progresses? So all those elements of population health are really important to think about. And I think, just closing with the Department of Justice: the information is public. You can search the Department of Justice website and see allegations and settlements-- very large settlements, including corporate integrity agreements-- that are happening in the fraud and abuse arena of HCCs. So again, the purpose of RADV is to uncover this, always with that idea of compliance. It's interesting if you go out and read some of the fraud and abuse elements of the Medicare Trust Fund related to Medicare Advantage and this prospective model. Thank you, Sue, for asking. (DESCRIPTION) Text, References. CMS Medical Record Reviewer Guidance (available on CMS website). Hyperlink, text, https://www.cms.gov/Research-Statistics-Data-and-Systems/Monitoring-Programs/Medicare-Risk-Adjustment-Data-Validation-Program/Other-Content-Types/RADV-Docs/Medical-Record-Reviewer-Guidance.pdf. 2019 HHS-RADV White Paper (available on CMS website). Hyperlink, text, https://www.cms.gov/files/document/2019-hhs-risk-adjustment-data-validation-hhs-radv-white-paper.pdf. Hyperlink, text, ICD-10-CM Official Guidelines for Coding and Reporting FY 2022 (available on CMS website).
FY2022 April 1 update ICD-10-CM Guidelines (cms.gov). Text, AHA Coding Clinic (available on the AHA website). Hyperlink, text, AHA Central Office AHA Coding Clinic (codingclinicadvisor.com). (SPEECH) Well, thank you everyone for your thoughts today. So we have time to take some questions that have come in from our audience. Bobbie, I think I'm going to ask you to tackle this, and everyone else chime in if you have additional thoughts. This listener asks the following question. (DESCRIPTION) Text, That's a wrap! (SPEECH) If a provider lists a diagnosis in the assessment but does not document treatment of the conditions, how is a coder supposed to know the documentation that the physician added in the assessment doesn't affect patient care? The physician puts it in the assessment for a reason. (DESCRIPTION) Text, Q&A. (SPEECH) So I would have to say, how do you know that it does affect care? He put it in his assessment, but-- I mean, Coding Clinic advice tells us that he specifically needs to state that. So just documenting that condition in the final assessment or final impression doesn't tell us what he did. Did he order tests? Did he address the patient's medication? Make a change? Send in a prescription for refills? What did he do to support that chronic condition? So I would say if you can't tell that-- if you apply MEAT criteria and you can't see that-- then you would need to work with your physicians to get that clarified. And in the outpatient setting, that's not going to be easy to do. Anybody have anything to add to that? Yeah, I think-- this is Sue. Another way to look at it is, if it's an HCC diagnosis, the provider, the group, et cetera, is going to receive some payment-- future payment-- to take care of these patients if they're providing care for conditions. So the physician in my mind has to earn that right. He or she has to document what they are doing with regard to this condition and the patient.
And if that isn't evident, or it's just mentioned casually and there's no evidence, it shouldn't be reported. Or if, as Bobbie said, you really think it's making an impact, then the physician would need to be queried to update their documentation to support that. OK. So let's move on to another question. This question-- and I'll just open this up to the group. The question says: I understand about the diabetes and similar diagnoses. Blindness and low vision are frequently listed in past medical history. A patient may have trouble reading a prescription bottle or being able to get to an appointment. Would you code this? So it kind of goes-- this is Colleen-- along the social determinants of health logic too. The fact that the patient has some form of reading disability-- it's sort of like a catalog of conditions sometimes. But we really need our physician or advanced practice provider to indicate how that affected the care and treatment. We can't assume that it affected the care and the treatment. I'll use my mother, who's an elderly person, as an example. I direct her care because she can't do a lot of the things that are sometimes asked of her. So I think it just depends on the situation, and we really need our provider to help tell that story of how it impacted the care. OK. Thank you. I think this next question I'm going to direct toward you, Bobbie. Speaking in terms of coding from areas other than the assessment and plan, is it appropriate to code from the narrative in the report? I have been given to understand that it's not. And I think, Bobbie, you touched a bit on this in some of your earlier remarks. Yes. So I know-- and I can't remember the specific reference-- but for HCC outpatient coding, you can use that entire note, as long as it's documented by the provider.
Now, we need to be careful because, with our EHRs, sometimes things get pulled in from nursing or from other screens where it's not documented by the physician. So I would be careful with those types of things-- problem lists, medication lists. But especially if a doctor is putting in a narrative-- and the ED was an example. A lot of times they will type a medical decision making note explaining why they're doing what they're doing, what they're ordering tests for, why they're doing a CT scan. And that's a great place to support coding additional diagnoses, whether they're chronic or not chronic. So yes, you can use the entire note, as long as it's documented by the provider. OK. Thank you. This question I'm going to open up to the group. If a patient has diabetes mellitus, hypertension, and chronic kidney disease, and the physician documents that the chronic kidney disease is due to hypertension, how would this be coded? And I know internally we've all kind of talked about this situation. I think what this question may be asking is: what if the physician says the CKD is due to the hypertension, but they don't treat or they're not addressing the CKD? What do you do? That's what I think this questioner was trying to get at. I will say this is one of the things that we have questions about, and we are hoping that Coding Clinic will address it. They're saying that you cannot report chronic conditions if they're not specifically documented as impacting the stay or requiring treatment. So what about those cases where you cannot report the CKD-- it's not being evaluated? The coding guideline says they both exist, so the "with" criteria is met and you can report both. Is that enough to report both? Or do we now need to look at that differently as well? And some of the other conditions-- like a wound clinic visit where the patient has paraplegia. Does a doctor need to get specific about how that paraplegia is impacting this visit?
Or is it enough just to say that the patient is a paraplegic? So these are things that we're hoping that, as Coding Clinic starts looking at the outpatient guidelines, they will start expanding on, because now these are questions in our heads as well. Yes. And we urge people to submit these kinds of conundrums to Coding Clinic. Audrey is sort of our resident liaison for Coding Clinic, and she submits a lot of our questions to them. So we've tried to be very proactive in submitting these questions and waiting for a response. So, next question: if an ED provider treating a patient for an ankle sprain documents that the patient has asthma, and he performs a brief respiratory exam, can the asthma diagnosis be reported? The facility follows MEAT criteria. So if I can take that one too. Sure. Sure, Chris. I'll pass it back to Audrey. So did the physician treat the asthma at any time during the ED visit? Did he provide any medication? Anything like that? If they do a full physical, they're going to look at the cardiovascular system, the respiratory system, GI, neuro, and put down their findings. And there may be findings. But do they treat those findings? And do they document that they treat those findings? If there's no documentation from the physician that he is treating that condition while the patient is in the ED, I would not code it. And I think-- this is Colleen. If you go back to MEAT: monitor, evaluate, assess, or treat. First of all, a lot of ED doctors do a full physical exam because they're looking for underlying conditions. But if the patient's coming in with an ankle sprain, I would have expected the HPI to indicate some shortness of breath or some comment about respiratory status. The fact that a respiratory exam was completed without a tie-in to the assessment-- again, I can't perform a physical exam, I'm not a provider-- but by the comments in the physical exam, what's the ramp-up to that? Is it stable?
Is there a need, again as Chris was mentioning, for additional treatment? So the fact that an exam alone is being done is still not showing me the monitor, evaluate, assess, or treat without a comment that the asthma is stable, that the asthma is well controlled. Or why are you performing a respiratory exam on a patient who's coming in with an ankle sprain? There has to have been something in the HPI that would have triggered that. Or they just do a full physical exam, which may be a standard of care, and not necessarily reportable. OK. Thank you, ladies. I think we have time for one more question, and Lisa will keep us on track with that. This question is: I am confused every time I read or hear about HCC and RADV. Is this just something that is related to MA plans? Or does it affect and apply to traditional Medicare? So this is Colleen-- or Chris. You can go ahead, Chris. So I was going to just go back to what RADV stands for. It is risk adjustment data validation. And these are audits performed to ensure that there is documentation supporting those HCC diagnoses that are used in the Medicare Advantage plans. So it does tie into the outpatient setting, professional services, and HCC diagnoses and supporting documentation. And I would just add to that, there certainly are others-- the RADV that we addressed today is specific, as Chris says, to the Medicare Advantage plans. There are other audit processes that are done by HHS and CMS for other programs, such as how the RACs are utilized or how the MACs are utilized. So there are different other methodologies of audit that are under the umbrella, but RADV specifically is related to Medicare Advantage. And I would add to this: so yes, we talk about this with regard to MA plans, but overall, the outpatient coding and reporting guidelines apply to all outpatient encounters-- be it facility, emergency room, or observation, as well as professional settings such as clinic visits.
Or the ED doc in the emergency department-- those visits. So you need to follow those guidelines for all outpatient coding, whether HCCs are involved or not. So just to make that clarification. So Lisa, I think you're probably going to tell me that would be the last question we could take for today. Unfortunately, I am, since we are right about at the two minute warning. So I am going to say let's go ahead and wrap for today. So thank you to all of our panelists-- such a wealth of information. It just shows how truly spectacular our consulting services team is to listen to. So thank you all. Just a couple things before we wrap up. (DESCRIPTION) Text, 3M Science. Applied to Life. 2022 3M Client Experience Summit. Reflect, Reconnect, Reinvigorate. July 18-21, Salt Lake City. If you already registered to attend the in-person event: 1. Go to the 2022 3M CES Virtual Event login page. 2. Enter your First Name, Last Name, and the Email Address that you used to register for the event. 3. Click the Next button. 4. The page will prompt you to enter a 6-digit verification code. You can find your verification code in a text message on your mobile phone and in your email. Enter the code. 5. Click the Log In button. After entering the verification code, you'll be logged in and taken to the event's Home page. If you have not yet registered to attend the event: 1. Click on the Register Now button at the top or bottom of the page. 2. Enter your registration information through the system. 3. You may need to wait for your registration to be approved after submitting your request to register. 4. After you receive a Registration Confirmation number and your registration is approved, you will be able to follow the steps above to access the virtual event. You will also receive a confirmation email that includes these instructions. (SPEECH) If you did attend our client experience summit in July-- we did host that; that is for our customers who join us.
Once a year. It was exciting to be back in person. We did record those sessions. And so if you're already registered for that, you can log back into the virtual event site to see those sessions. That link is in the resources section if you are interested, and you did not attend, and you are a customer. You are able to log in and sign up for it. But we will be checking registration because, again, that is a customer-specific event. But that link is in the resources section. (DESCRIPTION) Text, 3M educational boot camps for advanced CDI, pediatrics and quality training. 3M has successfully educated and trained thousands of CDI specialists and coders since the early 1990s, and we created the industry's first formal CDI program. With more than two decades of industry experience, our 3M consultants are not only experienced educators, CDI specialists and coding professionals, but they are also on the front lines working hand in hand with clients optimizing their CDI and quality programs. They take that expertise directly from the field and into the classroom, so you have the most up to date content to succeed in your role. Advanced CDI training. 3M's advanced CDI training, normally offered as part of the 3M Advanced CDI transformation program, will be offered in an engaging weeklong course. Available exclusively to 3M clients, this course will help address knowledge gaps and fundamental CDI skills and dive into the clinical and coding concepts. Upcoming training sessions: August 1-5, 2022. Advanced Quality training. The new advanced quality training takes a look at how CDI, coding and quality efforts can help or hinder an accurate reflection of the quality of care. Available exclusively to 3M clients, this course is perfect for seasoned CDI, coding and quality professionals and multidisciplinary CDI teams looking to take their quality programs to the next level. Upcoming training sessions: Nov. 14-18, 2022. Advanced CDI Pediatric training.
3M's advanced pediatric training can prepare your CDI team to recognize the unique problems and challenges in a pediatric population, leading to more accurate documentation. Available exclusively to 3M clients, this week of training has been tailor made for experienced CDI, coding and quality teams and individuals looking to develop their pediatric knowledge. Upcoming training session: Sept. 12-16, 2022. Click to learn more. (SPEECH) Other resources that we have, again, are our outpatient and inpatient resources. We also have our boot camps. We have several coming up in August, September, and November. So if you are interested in learning more about those, you can. And also, in the portal, if you are interested in learning more about just our solutions, products and services, in that middle section where it says ask an expert, if you click on that, you can certainly let us know there if you'd like to learn more. (DESCRIPTION) Text, Notices. Incorporating the International Statistical Classification of Diseases and Related Health Problems - Tenth Revision (ICD-10), Copyright World Health Organization, Geneva, Switzerland. ICD-10-CM (Clinical Modification) is the United States' clinical modification of the WHO ICD-10. The International Classification of Diseases, Tenth Revision, Procedure Coding System (ICD-10-PCS) was developed for the Centers for Medicare and Medicaid Services (CMS). CMS is the US governmental agency responsible for overseeing all changes and modifications to the ICD-10-PCS. If this presentation includes CPT or CPT Assistant: CPT is a registered trademark of the American Medical Association. This product includes CPT and/or CPT Assistant, which is commercial technical data and/or computer databases and/or commercial computer software and/or commercial computer software documentation, as applicable, which were developed exclusively at private expense by the American Medical Association.
The responsibility for the content of any "National Correct Coding Policy" included in this product is with the Centers for Medicare and Medicaid Services, and no endorsement by the AMA is intended or should be implied. The AMA disclaims responsibility for any consequences or liability attributable to or related to any use, nonuse, or interpretation of information contained in this product. If this presentation includes Coding Clinic: Coding Clinic is the official publication for ICD-10-CM/PCS coding guidelines and advice as designated by the four cooperating parties. The cooperating parties listed below have final approval of the coding advice provided in this publication: American Hospital Association, American Health Information Management Association, Centers for Medicare & Medicaid Services (formerly HCFA), National Center for Health Statistics. Copyright 2020 by the American Hospital Association. All Rights Reserved. If this presentation includes UB-04 information: Copyright 2019, American Hospital Association ("AHA"), Chicago, Illinois. Reproduced with permission. No portion of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording or otherwise, without prior express, written consent of AHA. (SPEECH) And in this presentation, if you did download the handout, there's just some information about some of the items that were presented today. (DESCRIPTION) Text, Thank you. (SPEECH) Lastly, we certainly appreciate you completing the survey just to let us know how we did. We greatly appreciate it. And with that, I'm going to go ahead and thank you all again for attending. Our next CDI webinar will be in October. And we also have next month's quality webinars. So we have our series continuing. And we appreciate you joining us today. So thank you all for joining, and thank you to our panelists again. So have a great rest of the day.

      Webinar title slide

      Applying compliant guidelines and M.E.A.T. criteria for appropriate Hierarchical Condition Categories (HCC) diagnoses

      • August 2022
         
      • Applying compliant guidelines and M.E.A.T. (Monitoring, Evaluating, Assessing and Treatment) criteria based on medical record documentation is a key requirement for supporting HCC coding. Join our panel of experts as they discuss the importance of applying guidelines and M.E.A.T. criteria as part of standard practices to ensure accurate documentation, quality patient care and improved data integrity.
    • (DESCRIPTION) A video conference. Two women sit in adjacent chat windows, wearing headphones. A text bar indicates Lisa Paulenich is present on the phone. A very small window on the screen shows a slideshow title slide. 3M C.D.I. Innovation Webinar Series. Data as a Catalyst to C.D.I. Program Performance and Physician Engagement, a Four-Step Approach. A photo shows two business people smiling in a conference room. (SPEECH) Good afternoon, and thank you everyone for joining us in our June CDI innovation webinar. Before we get started, I am going to go over a couple of housekeeping items. (DESCRIPTION) The slide changes. New year, New Platform. The additional text on the slide is too small to read. (SPEECH) If you joined us last year, and this is your first time joining us in 2022, you might notice that we have a new webinar platform. That is really here for a better experience for attendees. If you're joining today, definitely make sure you're using Google Chrome, closing out of any VPN, multiple tabs, that will help with your bandwidth. If you are having any issues with your audio, check your speaker settings and do a quick refresh. Because this is a web-based platform, there is no dial-in number. Everything is through the actual portal. We do offer closed captioning. So in the media section, if you do need closed captioning, that is available for you to start, as well. And because again, this is much more interactive, you can make the sections of the platform bigger, smaller, just so if you want to make the presentation bigger, you can see it that way. We do encourage questions. So in the Q&A section of the portal, please ask questions throughout. We'll get to as many as we can at the end. We do also have a resources section-- that is where you can download the certificate of attendance. If you want to submit those to obtain CEUs after this webinar, you can download that certificate there. 
The handout is also in that resources section for download, as well as a couple other items for more information if you're interested. We also have a survey that we would ask you to complete at the end-- we like to know how we do. And so before any more time passes, I'm going to go ahead and pass it over to Kaycie, who will introduce our speaker and kick things off. So Kaycie, go ahead. (DESCRIPTION) The woman in the right video chat speaks. The slide changes. Title, Learning Objectives. Additional bulleted text is too small to read. (SPEECH) Thank you, Lisa. Good afternoon, everyone. My name is Kaycie LaSage, and I am a Performance Outcomes manager with 3M. Today, I will be presenting with Carrie Wilmer, who is the CDI director for Intermountain Healthcare, formerly SCL Health. And I'll let Carrie introduce herself. (DESCRIPTION) The woman on the left smiles. The slide changes. Title, Meet Our Speaker. Two small photos of the women appear with biographical text, too small to read. (SPEECH) Good morning, good afternoon, everyone. I'm Carrie Wilmer. As Kaycie said, I am the director of the CDI program for Legacy SCL Health. We have since merged with Intermountain Healthcare and have been newly named the Peaks Region. So looking forward to our time together today. Yes, and I had the pleasure of working with Carrie and her team in my previous role here at 3M as a performance advisor-- I worked with Carrie and her team for about two years while I was their data coach. (DESCRIPTION) A title slide. Clinical Documentation Integrity, Legacy S.C.L. Health. (SPEECH) So we'll go ahead and get started with our first polling question. (DESCRIPTION) Slide change. A question and two answers appear. (SPEECH) What measures do you track to indicate physician documentation opportunity or success? (DESCRIPTION) The two answers. M.S. dash D.R.G. and Case Mix Index, C.M.I. Quality Data, Length of stay, Patient Safety Indicators, Hospital Acquired Conditions.
(SPEECH) Another minute-- seeing what the results are looking like. OK. (DESCRIPTION) Slide Change. Legacy S.C.L. Health. A map with colored sections and text too small to read. (SPEECH) Take it away, Carrie. OK, I didn't see those results come through. So we'll go ahead and just get ourselves started. So we already did our introductions, but just to give you an idea of our footprint and get the story started here today. We consist of seven acute care facilities in Montana and Colorado. And this graphic here is our original SCL Health footprint. Of course, with that Intermountain merger, we extend much broader into the Western region of the United States. So for our CDI program, we have 41 FTEs total, broken out into several different roles. We have 28 CDI specialists, and then 13 advanced roles as listed there, as far as the makeup of our team. So (DESCRIPTION) Slide Change. S.C.L. Health C.D.I. Program History. Three boxes of text appear, getting progressively higher on a line graph, with time as the x-axis. (SPEECH) to start with, we're going to go back in time a little bit, a few years, and give you how we got to where we are today in our agreement, or our relationship, with 3M on these reports, and what we have done to move our program forward. So back in 2013 is when we first started our centralization effort. At that point in time, we had CDI programs-- teams at each of the sites, except they kind of reported up differently, with different training, different tools. And we brought all of that together with a system approach to centralize that process and build it as one team, one system for SCL Health. So at that point in time, our response rate was 85%, agreement rate 86%-- not too bad. But we definitely were able to do more and move that needle. So at that point in time, at the beginning of our measuring, we were at about $875,000 monthly on our DRG shift approximation with our Medicare blended rates.
So 2015-2017, in that time frame, we were able to expand. We were very fortunate to have pretty significant investment into the program. However, that came about as a result of external consulting assessments and the messaging to identify that there was opportunity being left on the table. And so our executive leaders received that message, heard that message, and decided we're going to invest. And we expect that CDI is going to rise to the occasion. And fortunately, we did. So we were able to achieve new FTEs-- that was the beginning of our advanced CDI roles. And through the course of those couple of years, we really rose to the occasion, increasing to about a $1.8 million monthly average, with our record high being there in 2017. So from there on, 2018-2020, as would be expected with many established CDI programs, you get to a point where you plateau and you don't see as much improvement any longer. And so through that time period, we experienced some leadership turnover. We were still having very successful response rates and agreement rates, as far as our physician engagement across all of our sites. But our financials dipped a little bit there to a $1.4 million monthly average. (DESCRIPTION) Slide, Data as a Catalyst, Breaking through the Plateau. Two text boxes on a similar graph, time on the x-axis. (SPEECH) So we knew that we either needed to be able to validate that our plateau was true, true to form, and that there wasn't any more opportunity to be had, which was suspect, or we were going to need to find a new strategy, another way to revitalize and boost momentum, and identify what opportunity still remained for the program. So at that time, our physician education strategy was very much built on our CDI query metrics, what were our top questions, et cetera.
But we came to the conclusion, we can't boil the ocean-- we were not being effective trying to disseminate all education to all specialties and expect that that was going to move the needle any longer. So we needed data. And that was our challenge. We did not have line of sight to identify very easily where that opportunity would be found, how much was there, and really, as far as best utilization of resources, who would we need to partner with first? Which physicians, which groups, to be able to catapult and serve as a catalyst to move the program forward? So we decided to engage with 3M and begin leveraging the performance data monitoring reports to be able to focus in and use that data to restrategize, and continue to push the messaging forward. So I'll turn it over to Kaycie. (DESCRIPTION) Slide, Performance Data Monitoring. Bulleted text beside a graphic of bright light points connected in a web in front of a cityscape. (SPEECH) Thanks, Carrie. So the reports that Carrie and her team used are in 3M's online cloud-based tool, called Performance Data Monitoring, or PDM. The reports in PDM are based on submitted inpatient claims data, looking at the total inpatient population, not just what CDI reviewed. And this allows for a more holistic view of the inpatients, helps to understand the effectiveness of CDI education of providers, and helps to identify gaps. (DESCRIPTION) Slide. Two bulleted text paragraphs appear, labeled Physician Reports and C.D.I. Performance Reports. (SPEECH) The goal of utilizing the PDM reports is to gain insight into key metrics and performance improvement opportunities against baseline and best practices. The physician reports get down to the cases attributed to a particular physician, for example, Dr. Smith, orthopedic surgeon-- what's the MCC/CC capture on Dr. Smith's cases? What SOI/ROM subclasses do Dr. Smith's cases typically fall into? And then, compare that to the other providers in Dr.
Smith's practice group, the other orthopedic surgeons at the facility, and compared to the national norm for ortho. The CDI report section has both financial and quality reports. The compare report is a financial report that's based on MS-DRG. And for example, you could compare the facility's performance in Q1 2021 to the performance in Q1 2022, and also against national norms, to include the metrics that are bulleted here on the slide. Severity and mortality reports are based on APR-DRG, and those compare the facility's performance against baseline and state peer groups. (DESCRIPTION) Slide, Role of the Performance Advisor. A bulleted list of text labeled Coaching with performance data advisor. Six button graphics appear beside. (SPEECH) In my role as a performance advisor, working with Carrie and her team, like I mentioned before, I was there to be their data coach. I worked extensively with them to help them understand their data in PDM, and how to effectively utilize PDM as a tool. There is a ton of data available in PDM. And like Carrie mentioned before, you can't boil the ocean. So they needed to understand where to focus their efforts for improvement. And I helped them identify focus areas that they then investigated further. (DESCRIPTION) Title slide, The Data-Driven, Physician-Focused, Four-Step Approach. (SPEECH) And now, we're on to our next polling question. (DESCRIPTION) A question appears with three answers. (SPEECH) How do you identify opportunities for physicians' CDI education? And I'll hang out here a minute before we go to the results. (DESCRIPTION) The three answers. Use C.D.I. query trends. Use claims level data. General C.D.I. industry trends. A fourth option appears to be below the bottom of the screen. (SPEECH) Let's see-- and it's submitting. Oh, there's some people submitting. Oh, 2004, I apologize. In the interest of time, we'll go over-- so a combination of all seems to be the trend.
(DESCRIPTION) Slide, Data Analysis and Opportunity Identification. Step One, with a numbered list of text. A graphic of two people standing before an oversized computer monitor, twice as big as they are. (SPEECH) All right, turn it back over to you, Carrie. Yep, and that result really isn't too surprising, as far as the number of different data metrics that can come together to tell the CDI story. So we have four steps to go through, as far as how we were able to slice and dice, pull this together, and present, and we're hoping that this will be helpful to simplify the process as you work through the data, whichever form you may be using. So our first step is, obviously, we've got to analyze the data. We've got to identify that opportunity. So in being a multiple-site system, we had to tackle this from a couple of different angles. We first went through and looked at each of the individual care sites to be able to look at their patient mix, the population, the services offered, and be able to see what the top opportunity truly was. From there, though, we wanted to be systematic again, as best we could, to drive that education. So we looked for and identified the themes that surfaced across all of the care sites from a system perspective. From there, we took all of those data results, those top DRGs, took a sampling, and we knew that we needed to make sure that we validated the data. Some of the DRGs that rose to the top, as far as the financial opportunity based upon the MCC/CC capture, maybe didn't truly manifest into the opportunity that we would expect. So we needed to partner both together in order to first bring forward and identify what those opportunities were. I would also say that we were able to then partner these findings with our prioritization tool within 3M 360 Encompass, as well, and I'll be touching on that a bit more in the presentation. So Kaycie, why don't you talk through some of these screenshots of the data?
(DESCRIPTION) Slide, Service Line and D.R.G. Level Opportunity. Two bar graphs are placed above a table full of text data. (SPEECH) So this is a screenshot out of PDM for one of Carrie's facilities. The graphs at the top are broken out by the MS-DRG service line. So we've got the medical opportunity and the surgical opportunity. And the estimated, or the potential, revenue here is based on MCC/CC capture opportunity. So down below, we get a little bit more granular. Here, we're looking at the triplet DRGs, and looking at the full triplet. So for that very first row, the major small and large bowel procedures, we're comparing how many times the facility was in either 329 or 330, compared to how many times they were in 331. So the 63 number, the little hyperlink, is the count of cases in 329 or 330. Then, you've got the 33 cases in the DRG 331. The total cases-- the actual capture rate, then the performance benchmark, which for this particular example happens to be MEDPAR. Then we've got the capture rate variance. And that reimbursement differential is saying that, if the facility was to capture the MCC or CC, just for this DRG cluster, at the MEDPAR 80th percentile performance, it could be a potential additional $286,000. So from here, what Carrie and her team would do is drill into those 33 cases that are in the DRG 331. And get to that encounter level detail to then pull up the case in the EHR or in 360 to see what was really going on with those cases that did not have an MCC or a CC. (DESCRIPTION) Slide change. A large table of itemized text data in 10 columns. (SPEECH) Another screenshot from PDM-- this is now looking at the mortality data. So now, we're looking at the APR-DRG service line. And we've got the information from MEDPAR for the state of Colorado, the facilities in Colorado. So we've got all the information on the total cases from MEDPAR and the actual deaths and the mortality rate. And then we get into the facility-specific information.
So we've got the total cases and the actual deaths in each one of the APR service lines, and then we've got our mortality rate information. So the service lines that are in the red font in the two right-hand columns with the little asterisk, those are the ones that have an unfavorable mortality variance. When we get further down in the list, you can see the service lines that did have a favorable mortality variance. Looking at orthopedics, though, they had nine deaths when they were only expected to have 5.3. (DESCRIPTION) Slide. A.P.R. D.R.G. Level Mortality Opportunity. A large table of text data in several columns. (SPEECH) On our next slide, we see getting into the individual APR-DRGs in that mortality data to then get to your drill down. So here, we're looking at the APR service line of medicine, and we've got APR-DRG 53 and 242. So again, we've got our breakdown of the cases for the MEDPAR data in the different ROM subclasses, and then we get into the facility-specific information. So you can see here for both of these APRs-- the deaths, the one death in each APR, occurred in subclass four. So each of these APRs had an unfavorable mortality variance, but the actual death expired where we would hope that they expire. So what Carrie and her team would do from here is actually go into the cases that discharged alive to see if any of those cases in a one, two, or three could have moved to a higher subclass. (DESCRIPTION) Slide, Data Analysis Summary. Step 1. Bulleted text beside a graphic of hands pointing at a pad of paper. (SPEECH) Turn it back over to you, Carrie. Great, so in summary, as we looked at the opportunity, we really focused on it from two angles-- from that financial opportunity, and then the mortality, as we just showed in those screenshots.
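The two calculations behind those screenshots boil down to simple arithmetic. Here is a minimal Python sketch using the figures quoted above where they were given (63 cases with an MCC/CC, 33 without; 9 observed deaths versus 5.3 expected); the benchmark capture rate and the per-case reimbursement differential are illustrative assumptions, not the actual MEDPAR 80th-percentile values behind the $286,000 on the slide:

```python
# Illustrative sketch of the two PDM-style calculations described above.
# The benchmark rate and per-case differential are assumptions, not the
# actual MEDPAR figures.

def capture_opportunity(with_mcc_cc, without_mcc_cc, benchmark_rate, per_case_differential):
    """Estimate extra reimbursement if MCC/CC capture rose to a benchmark rate."""
    total = with_mcc_cc + without_mcc_cc
    actual_rate = with_mcc_cc / total
    variance = benchmark_rate - actual_rate
    # Cases expected to move out of the low-weight DRG at benchmark performance
    shifted_cases = max(0, round(variance * total))
    return actual_rate, variance, shifted_cases * per_case_differential

def mortality_variance(observed_deaths, expected_deaths):
    """Observed-over-expected ratio above 1.0 flags an unfavorable variance."""
    oe = observed_deaths / expected_deaths
    return oe, "unfavorable" if oe > 1.0 else "favorable"

# DRG 329/330 vs. 331 example: 63 cases captured an MCC/CC, 33 did not.
rate, variance, dollars = capture_opportunity(63, 33, benchmark_rate=0.80,
                                              per_case_differential=20_000)
print(f"capture {rate:.1%}, variance {variance:.1%}, opportunity ${dollars:,.0f}")

# Orthopedics example: 9 observed deaths vs. 5.3 expected.
oe, flag = mortality_variance(9, 5.3)
print(f"orthopedics O/E = {oe:.2f} ({flag})")
```

The point of the drill-down is visible in the arithmetic: each percentage point of capture-rate variance maps to a countable number of encounters worth pulling up in the record, and any O/E ratio above 1.0 marks a service line to review.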
So we were able to look at it from that care site level, from a service line level, and then to be able to have the power to drill down deeply into the detail of the DRGs specifically, and monitor what is happening, and pull out those case examples for review to validate the data. All very valuable insights to be able to set us up for our second step. (DESCRIPTION) The slide changes. (SPEECH) We went too far, apologies. (DESCRIPTION) Another slide change, then it returns to the previous slide. Identify the Right Audience. Four graphic boxes arranged in a square. A dot in the center of them with arrows pointing to each space between the boxes. A paragraph of text in a list beside. (SPEECH) Oh, we just have it flip-flopped. So the second step, my apologies, is identifying the right audience. And so this here is the prioritization matrix that we used to be able to bring the opportunity forward to our care site leadership, our CMOs, who have been identified as really serving in place of a physician advisor program. So what we did was take each of the opportunities, size, and scale for each site, but also kind of map it, think it through, in terms of a continuum of engagement-- who would be most likely and most successful to meet with to be able to have adoption, to be able to have an engaged conversation about the opportunities, and what we may be able to do to partner with that particular group to move the needle on the data. So in this particular example, from one of the sites-- cardiovascular surgery and neurosurgery, of course, had a very high opportunity, being surgical DRGs. And general surgery, also, very high opportunity. However, due to the contracted relationship and some of the potential politics behind the scenes, it was determined that we weren't going to spend any time there. It's going to be an uphill battle. We need to just leave that be; let's focus where we're going to be successful.
And so we were able to really maximize, then, the opportunity with the cardiovascular and neurosurgery groups, and the hospitalists. Orthopedics is up there in this particular example, though. They were low engagement and low opportunity for this site. So we really had no conversation or need to explore that angle. So knowing that we all are so stretched with resources, bandwidth, and even being respectful of physician workflow, workload, and all of the demands currently, we really wanted to make sure that we were prioritizing who and which audiences we were seeking out. So some of the criteria to consider here would be to think about the group size, think about their leadership structure, the employed model, or contracted-- are they private physicians, surgeons, providing services in the hospital? We also had conversations about the mid-levels, and in some cases, we did a sequenced meeting where we met with group leadership, then we met with the group, then we met with the mid-levels, because that group really expected that their PAs and NPs would be carrying much of the load of the documentation. We also incorporated our CDI query data into the mix in all of these decisions. So (DESCRIPTION) Slide Change. Audience Identification and Customization. Step Two. A bulleted list of text. (SPEECH) we took that prioritization matrix, and that was really a driver of the conversations that I had with the CMOs. And so with our care site leaders, I needed their expertise and their guidance to know the personalities of whom I would be meeting with. I also needed their backup to join each of these conversations and be able to continue to support the importance of why we were going to need to be having these conversations. It was also very eye-opening to be able to talk through specific initiatives for each of the care sites that may be different than the data that I had available, to be able to bring my CDI opportunity forward.
But length of stay was one of those measures that is also directly impacted by the documentation, and the way that we were able to then marry those conversations together and join the initiatives and get further bang for the buck, as far as the engagement into that documentation. So we talked through the data, we talked through the challenges of the groups, we talked through which groups would be most optimal. We talked about even the structure and makeup of the content of the presentation, itself, which I'll get into more so with step three. But it became abundantly clear, as we set out on mapping the initial logistics of these physician education meetings, that we needed to remain mindful of culture and engagement and ensure we were doing everything we could to support as much buy-in as possible, and make it an effective use of everyone's time for the meetings to come. (DESCRIPTION) Slide Change. Effective Communication, Step 2. Bulleted text beside a graphic of a person holding a tablet with a pie chart on it. (SPEECH) So step two, know your audience. Tailor your messaging. Focus that messaging into what is going to be most valuable for that particular group. It was interesting, too, as far as even feedback from the CMOs on how much data to include in a presentation or not. So although we built a templated presentation to be able to deliver readily for any of these meetings that would come up, we definitely customized and altered each one. And I had some meetings where I had no data whatsoever-- it really emphasized case examples and more of the qualitative aspects of the documentation and what we found. And I had other presentations where the CMO wanted full data, unblinded, and make sure that those physicians could see where they fell against their peer group as transparently as possible. So there is such importance, though, to be sensitive and ensure you know the audience of who you're going to be coming in to speak with. So we have another polling question. 
(DESCRIPTION) Slide change. Who primarily delivers C.D.I. education to physicians at your organization? The options. C.D.I. Specialists. C.D.I. Managers or Directors. C.D.I. Educators. A fourth option is cut off the bottom of the screen. (SPEECH) Excuse me, but I'm going to see how our results look. C, CDI specialists. OK. (DESCRIPTION) Percentages appear for each option. Specialists at 49.4%. Managers or Directors, as well as Educators, each at 17.4%. (SPEECH) Not too surprising with the results there, especially as there are so many different makeups of CDI programs, in terms of the amount of bandwidth that any individuals may have across the team. I'm a little surprised to see that the physician advisor score was a little lower, as I know that is one very successful strategy for being able to disseminate, and peer to peer, have these discussions and education. But also good and validating to show that really, any of us can be delivering these messages. (DESCRIPTION) Slide change. Presentation Development and Delivery, Step 3. A list of bulleted text. (SPEECH) So for step three, this is the actual presentation itself. So I have a number of screenshots, just as samples, to show how we did it, how we communicated the messaging. And a few different screenshots, too, again, back to the PDM data that was driving much of the content here. So first and foremost, we leveraged the SBAR framework. And I'll touch on that more in this next slide to come, but throughout the course of the content-- I'm sure we all have versions of very similar presentations. But we've got to make sure we're outlining what CDI is, why does CDI matter, what is that opportunity, and where are you going to find it? What do we need from you as a result? We've got to prove it. And so that's where the case examples come back into play.
We validated the data by doing those specific case reviews, so we had an abundance of examples right there at our fingertips to be able to pull into these presentations, to be able to show: this is a case, and this is what we saw, this is what we found, and this is where and how there could have been a potentially different data set at the end with that final DRG. And then at the end, we, of course, need to always be clear on what the ask is and what we need from each of these providers engaging with us. (DESCRIPTION) Slide, S.B.A.R. Framework. A colored box of bulleted text beside a list of text labeled C.D.I. opportunities for St. Mary's. (SPEECH) So back to that SBAR-- it really is setting up the framework for the need and why we're having the conversation. So it may be widely known across the audience here today, but it is definitely a key tool from a clinical bedside nursing perspective, being able to succinctly communicate with the physician about changes with the patient at hand and needing to potentially report vital signs, report that lab value, and change the course of treatment as a result. So we did literally write out an SBAR statement to start each of these conversations, to be able to highlight what the situation was. We have data showing we have opportunity, to give that background, as far as making sure that they know that they have a CDI team of registered nurses that are reviewing this documentation. That we have done that assessment to identify the opportunity, and to share what the conclusions have been and provide those recommendations. So out of the gate, give it all to them in a very, very brief skeleton of what we are here to talk about today. And then, get into the nuts and bolts of the details. (DESCRIPTION) Slide. Slide Example, Physician Education. Two boxes appear, each with a different colored graphic in it, displaying data. (SPEECH) So when it came to what is CDI? Why does CDI matter?
I've seen many different depictions of CDI at the center of the wheel, and how documentation and the query effort support so many initiatives. There are obviously so many more than even what we have listed in our slide, but this was the graphic that was most liked, in terms of really just listing most simply that, with a bit of attention and effort on that documentation, you're really killing two birds with one stone--you can have multifactorial effects as a result. Then we would get into making sure that they knew that it needed to be their documentation, that the diagnoses made needed to meet four criteria to be captured in the final code set--the treatment, the monitoring, the evaluation, et cetera--and that we then get that group of codes. We get the DRG, and it is those DRGs that are then driving all of the data measures included here. (DESCRIPTION) Slide. The graphics are replaced with tables and charts full of text. (SPEECH) So we would also give an example, just a high-level example, not with supporting documentation yet at this point, but just to be able to show a DRG shift. And as I had stated, one of those themes that really came out across many of our sites was the emphasis on length of stay. So we were able to highlight the shifts to the DRG, and how the documentation would buy them more time to be able to take care of their patients. So out of the gates, just in setting up the groundwork: this is what we're looking at, this is what we're asking for. We need that specificity. We need to be able to capture the codes appropriate for that patient. We would then shift into some mortality conversation. And although it's a bit complicated, as Kaycie already talked us through the APR-DRG, we would be able to really focus that message to show the twofold CDI approach to mortality. We want to make sure that those cases that expire are as high as possible with the risk of mortality.
But it also is equally valuable to look at the generalized population within that APR that are falling to the lower levels, one, two, three, and four, to be able to explain our context, our approach, and thought process to the documentation. (DESCRIPTION) Slide. The charts are replaced with additional charts, formatted in a similar fashion. (SPEECH) Through the PDM tool, we would have the opportunity to get a physician's listing of DRGs and their MCC/CC capture variance. So just like one of those initial screenshots that Kaycie spoke through at the care site level, to be able to drill down at the physician level and see all of their claims, whether queried or not, CDI reviewed or not, but just to be able to see what their capture rate was and what their variance was compared to the benchmark. And then, to be able to have those projected financial dollars to further quantify the opportunity for the physicians. So in some cases, we included this. In some cases, we did not. We really leaned on the guidance and advice from the CMOs in knowing the personalities, again, as I said earlier. So the second piece of data here is actually a newer data point that the PDM tool was able to provide. But it was fascinating how many times I was able to pull this out to further prove the point that we may have some opportunity to move the needle. So it's very small, I realize, and blinded, blanked out with the physician names, but what that small table is really depicting is four different surgeons with their case volume. And then, it has how their volume broke out for each of the severity levels--one, two, three, and four--with what their average length of stay was for one, two, three, and four. So for the one that's circled--and hopefully you can zoom in on it, or when you get the slide deck you can look more closely--what we saw was that the length of stay was much higher for SOI three compared to four.
So that is kind of counterintuitive to what we would think as far as the amount of resources there. We wouldn't want patients staying longer at lower levels. We want to be able to maximize that. (DESCRIPTION) Slide. A bar graph on the left, and on the right, several smaller bar graphs with data. (SPEECH) So additionally, this is yet one more potential data representation, and we would again not use all data points for all presentations. We picked which ones were most fitting and most convincing for each of the conversations that were held. So in this one, what you're seeing is a graphic to compare the physician's breakout, again, of that severity of illness--one, two, three, and four--which is the larger graph on the left side. And then, to be able to compare how their percentage of SOI capture compared to their specialty, compared to their physician group, and then compared to the national norm--those are the three graphs in the middle there. So in this example, this was a urologist at a small site; he was his own specialty and he was his own group. So both of those did not really bring us much value, because it's the same data set. But what was fascinating was how that graph looks for his performance compared to that national norm. So you can see the lightest peach color there is very low at that national norm. His is very high. And so he does not mimic the national trend in terms of the severity and the amount of secondary diagnoses being captured on his cases. So this was a very compelling graphic to be able to share. (DESCRIPTION) A paragraph of bulleted text beside a table of several smaller bulleted paragraphs. (SPEECH) So I mentioned before, all of those case reviews bringing in those examples--I would recommend don't have a conversation without examples at the ready, to be able to talk through and to give more of the context of how the data applies in a real-life example.
I would also recommend that any examples you have are as timely as possible. As we all know, we probably get that pushback: oh, it's not my case, or oh, that was six months ago--I've changed my template already, I've already fixed this problem. So we heard all sorts of different responses and rebuttals to what we were seeing. We tried to just stay on track to be able to get the concepts across. But have real examples pertinent to the audience that you are presenting to, and as real time as possible. That last graphic there is a very oversimplified listing of many of the general themes that, of course, are red flags for CDI specialists as we review our charts. But this was actually very helpful to have as a kind of synopsis, a one-pager that focused the talk on what some of those key diagnoses are. We know that the physicians aren't going to remember, they're not going to be able to keep this front of mind at all times, but this actually was a well-appreciated summary that we were able to provide. (DESCRIPTION) Slide. Initial Recommendations to Physicians, with an outline of bulleted text. (SPEECH) And then, last but not least, we make sure that we include those recommendations--what is the ask? What is it that we need from them? So I think the slide may still be a little wordy, but it's ultimately summarizing the same message. And CDI talking points are often very similar, but we all know we need that comprehensive H&P, we need the progression of the documentation through the progress notes. And we need that final statement and discharge summary to wrap up the case with all of the details therein, and make sure that we are solid for each of the codes captured. And then we really emphasize the CDI query as a tool. And if necessary, that we are there to support and to help be a layer and a safety net to help them get the documentation that they need. So a lot of different angles there--slide decks, again, similar, I'm sure, to many that you have out there already.
(DESCRIPTION) Slide. Monitor Performance and Communicate Progress. Two lists of text. On the left, bulleted list of steps. On the right, a text bubble, What Does Success Look Like, with text below. (SPEECH) But this was our approach; this is how we were able to incorporate that data. So as we've been moving through these four steps, we've made it to that last one. But just to recap: that first step was, we've got to be able to get our data analyzed and identify that opportunity. Then, we prioritized the message and the audiences that we selected to be presenting to. And then built the customized presentation for them. So in order to then wrap up this process, we needed to make sure that we evaluated the effectiveness of the approach, and of the education. We needed to identify KPIs to define what success was going to look like for us. And this may look different across organizations and across sites, but some to consider there on the slide would be fewer queries issued, potentially increasing your CMI, increasing the severity of illness, increasing the risk of mortality, or decreasing that mortality index. So it's important to be able to track and monitor the effectiveness, but also to be able to give the feedback. So one of those phrases that I had heard candidly was that CDI education seemed so random in our previous approaches. That it was just a flavor of the month, something that was in front of their minds, and then they never heard anything more about it again until that flavor came up again. So these meetings really opened the door to ongoing dialogue, and to be able to keep that discussion and commentary going. So to be able to feed back the progress--or lack thereof, if nothing was really changing--then we needed to regroup and further emphasize or re-educate, revisit any of the themes that we were continuing to find in the data, to continue to move that needle.
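The success measures named here run in different directions: queries issued and the mortality index should fall, while CMI, severity of illness, and risk of mortality should rise. That directionality is worth encoding explicitly when tracking quarter over quarter. Here is a minimal sketch in Python; the metric names and sample values are hypothetical illustrations, not the presenters' actual figures.

```python
# Illustrative sketch of quarter-over-quarter KPI tracking for the measures
# named on the slide. Metric names and numbers are hypothetical.

def mortality_index(observed_deaths: int, expected_deaths: float) -> float:
    """Observed-over-expected mortality; below 1.0 means fewer deaths than predicted."""
    return observed_deaths / expected_deaths

def kpi_improved(name: str, prior: float, current: float) -> bool:
    """'Better' runs in different directions: queries issued and the mortality
    index should fall; CMI, severity of illness, and risk of mortality should rise."""
    lower_is_better = {"queries_issued", "mortality_index"}
    return current < prior if name in lower_is_better else current > prior

# Example quarter-over-quarter checks
print(kpi_improved("cmi", 1.62, 1.68))                                   # True
print(kpi_improved("mortality_index", 1.10, mortality_index(42, 44.0)))  # True
```

A dashboard built this way makes "progress, or lack thereof" a yes/no answer per metric, which fits the ongoing-feedback loop described above.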
So we wanted to maintain visibility of an ongoing initiative, not just a meeting, and then they weren't going to hear from us for a while. (DESCRIPTION) Slide. The Four Step Approach. A graphic of a circle cut into quadrants, with labeled text. (SPEECH) So those are the four steps. It's really very simple. Again, you got to be able to analyze the data, select your audience, deliver that presentation, and then track the outcomes and monitor that performance. (DESCRIPTION) Title Slide. Key Outcomes and Lessons Learned. (SPEECH) So for key outcomes, lessons learned, I'll go through these quickly, just to demonstrate how and why we were able to use the data and validate this approach. (DESCRIPTION) Slide. A pinwheel. Text in a circle wheel surrounded by six circles on spokes, labeled with graphics and text. (SPEECH) So the next slides really break out each of these areas of the circle. It's kind of a more holistic view of the various angles where we were able to see improvements, and tremendous steps forward with the CDI program this past year. (DESCRIPTION) Slide. A Deeper Look, Vascular Surgery. Two tables of text data, labeled At a Single Facility and Across the System. (SPEECH) So first, we're going to take a look at through a vascular surgery lens. So at one of our facilities, we met with the leadership of the vascular team and established weekly rounding-- which we had not had in place before. But through his advocacy and support and understanding of the importance of the data, the documentation driving that data, he was able to get us embedded into their weekly rounding to be able to talk through live cases. So we saw after one quarter, a real quick win on our severity of illness and risk mortality scores, as indicated there on the slide. And then, we also saw, year over year improvement as we continued that engagement with that group, that their severity index improved, CMI variance improved. 
And the opportunity per case, per physician, in that group on average decreased by about $2,000 for each case. So although those negatives are still there on those percentages--meaning we are still under the national norm--some significant headway was made through that engagement. From a system perspective, we were able to incorporate the data again into the prioritization functionality in our 3M product, 3M 360 Encompass. And so for DRGs 219-221, taking the average increased CMI shift that occurred within just that DRG grouping, applied to the volumes within that grouping, and with our Medicare blended rates, we approximated about $388,000 gained due to increased CC and MCC capture within just that one DRG grouping for the year. (DESCRIPTION) Slide. A Deeper Look, Orthopedic Surgery. Two additional tables, similar to the previous slide, but with different data. (SPEECH) The second example here was from an orthopedic surgery lens. So this was an interesting engagement, in that I was so impressed by the level of engagement and championing that this one particular surgeon had in acknowledging and validating the importance of our data and all of the information that I had brought forward to him. It appeared that he had a potential opportunity of greater than $1.5 million just for his spine cases. So for those of us across the audience that do the chart reviews, we know that those spine cases can be very difficult to get opportunity captured, as such. It also is very difficult for an orthopedic surgeon to feel confident and aware of all of the medical criteria that goes into making many of those medical diagnoses for their patients.
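The back-of-the-envelope math behind a figure like that $388,000 (average CMI shift within the DRG grouping, times case volume, times the Medicare blended rate) can be sketched as follows. The input numbers here are made-up placeholders chosen to land near the quoted total, not the actual values from the presentation.

```python
# Hedged sketch of the revenue-impact approximation described above:
# (average CMI shift per case) x (case volume) x (Medicare blended base rate).
# All inputs are illustrative placeholders.

def estimated_gain(cmi_shift_per_case: float, case_volume: int, blended_rate: float) -> float:
    """Approximate dollars gained from increased CC/MCC capture in one DRG grouping."""
    return cmi_shift_per_case * case_volume * blended_rate

# e.g. a 0.097 average relative-weight shift across 500 cases at an $8,000 blended rate
print(f"${estimated_gain(0.097, 500, 8000.0):,.0f}")  # → $388,000
```

As the presenters caution later, these are projections for benchmarking and prioritization, not actualized dollars.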
So due to the size of the opportunity and the level of engagement, there was a business case brought forward that ended up being approved to hire a nurse practitioner who would be more medically trained, to be able to support this group in their documentation and cover those patients, so that he could spend more of his time in the OR where he needed to be. But we could also ensure that we had the expertise that we needed to look at the medical diagnoses for those cases. Additionally, beyond that, some of the qualitative conversations that were spurred from many of these meetings across the system have furthered our own internal evaluation of the value of potential documentation assistance tools, like computer-assisted physician documentation. And specifically, more engagement and collaboration than we have ever had before, as far as invitations to be at the table, bringing our data in conjunction with many of the data sets that our quality team is using, from care management, et cetera, and being able to really collaborate and move the needle going forward. (DESCRIPTION) Slide. A Deeper Look, C.D.I. Program. Four small columns of bulleted text. (SPEECH) So more of our outcomes--you probably saw it on that wheel slide a few slides ago--but we did achieve a 36% increase in our financial totals for the year. This was in conjunction with, again, the prioritization functionality: being able to identify what DRGs had the biggest room for movement with that CC/MCC capture, and making sure that we were getting CDI coverage on those DRGs. So I have a couple of them listed there, again, with those approximations, year over year, of what was achieved due to increased CC/MCC capture for those DRG groupings.
In a time when we all need to be good stewards of our resources across all of our organizations in health care, it was a tremendous investment that we were able to achieve, as well: it was decided that we would further expand the program this past year, and we had five new FTEs approved for the CDI program in the fall of last year. So initially, those roles were slated to be an additional educator, an auditor lead, and two new CDS positions. We did shift that educator into an additional auditor, but we were able to get those posted, filled, and trained, and we are working hard. From a CDI performance perspective, the query rate increased from 31% to 37% on average. So all of these data insights have really helped us build our internal CDI education in partnership with each of these physician education engagements. We are able to know what those opportunities are, to be able to adjust, to make sure that we're covering the right cases, that we're asking the right questions, and that we're providing the tools and resources needed to keep the success of the CDI program moving with the opportunities for the claims. (DESCRIPTION) Slide, Final Thoughts. Two columns of bulleted text. (SPEECH) So that brings us to the end here and some of our final thoughts. Some of the challenges and our lessons learned are important to call out. Providing sustained feedback was impacted by reporting cadence--but I will own that that was our own internal decision. It's very difficult to know what that sweet spot is going to be, because there is such an abundance of data through these reports. If you are getting the reports too frequently, you may not be able to maximize each of the shifts in opportunity. But at the same time, we really have been needing to get that feedback as timely as possible to keep the engagement and keep that conversation going, as I said.
So that was just one internal finding that we have experienced and had many conversations around. Be careful with the data projections. These are not actualized dollars, and these are not actualized results--don't expect to get the whole amount. The methodology is good, and it's what we have to be able to benchmark against our peers. But every patient, and every patient population at every site, is different. And there are those nuances where you've got to do that case-level review and validate that the opportunity the data is telling you about is or is not there, and be able to adjust accordingly. As a trend, as a kind of compass pointing us in the right direction--invaluable. But just not actualized to the exact dollars. Be careful with data potentially getting into the wrong hands. Some of these data sets are complicated, and it takes some time to explain what the audience might be receiving. So be careful to ensure that you're either able to explain it, or to simplify that message as best you can, and not have incorrect interpretations and assumptions made, especially when it comes to some of those actualized-dollar perceptions that might be out there. Limitations with physician attribution--we definitely ran into some of this, as far as being able to identify, for each provider, which group they need to fall within in the data mix. And I'm going to pause and shift it over to Kaycie; I think she has some more to say on that particular point. Thanks, Carrie. Yes, within PDM, there is a limitation with physician attribution. The physician attributed to a case is the attending of record at discharge. So we're not able to say that a case is attributed to a particular surgeon if they were not listed as the attending. We know that can be problematic, especially in the situation where the hospitalist is the attending--we know that they didn't perform the surgery on a surgical DRG--but it is a limitation within the system.
So with the last bullet, the impact of COVID-19 on benchmarking--we are all well aware of what COVID did to us from a benchmarking perspective. When we look back at our 2020 data--obviously, 2019 didn't have COVID in it--from a CMI perspective, a lot of facilities saw an increase comparing year over year, once we had 2020 data with all the COVID in it, because we had this high rate of medical DRGs, and the only surgeries that could be performed were emergent ones, so they were super high weighted. And then we get to 2021, which may have been a bit more of a normal year, and from a CMI perspective, comparing back to 2020, things didn't look so great. Risk of mortality was another one where we saw COVID have a huge impact, in that there was no COVID data in the benchmarking information. So all of the additional unexpected deaths in the pulmonary population really hurt a lot of facilities. So one of the things that we've done at 3M, in the PDM data--we know that MEDPAR and HPOP are a handful of years behind--is come up with an internal benchmark called CCB, which stands for client comparative benchmarks. Those are based on participating 3M clients that are in this pool of data, so that our PDM customers, working with their performance advisor, can select which CCB benchmark works best for them. Are they an academic facility? Are they a smaller, rural facility? They pick the CCB that fits with their facility, and it gives us a more real-time, more current benchmark to look at how other 3M customers or clients that look like your facility are doing. Turning it back over to you, Carrie. Great. So I feel a bit like a broken record at this point, but our criteria for success emphasize much of what we focused on here today. So do ensure accurate physician demographic data.
So Kaycie talked about attribution, as far as that case, and who it's assigned to. But beyond that, too, there is the ability to identify and make sure that you are correct and good with which physician group, which specialty are those physicians aligned with, because that then, in turn, impacts the data that you're able to see from that internal comparison-- some of those graphics shared earlier in the presentation. Always leverage your case examples, the real examples, show those documentation opportunities and make those as real time as possible, and as applicable as possible to your audience. Garner physician care site leadership support and participation. So whether that is through your physician advisor team, whether that's through your leadership at your site, specifically, or even being able to gain that leadership to that specific physician group, invaluable, to be able to have that backing and even just one more angle of perspective to share with the group and to be able to help answer the questions that undoubtedly would be raised through each of these education sessions. Partner your data with your prioritization functionality for your CDI team, if you have it, to be able to make sure you're getting the coverage onto the DRGs that have the opportunity. Do our due diligence to make sure that we are present and we are reviewing and able to catch that opportunity, especially as those physicians are trying so diligently, so hard, to get the documentation in there. Tailor your data and presentation to each audience-- not every data point is going to necessarily be of interest or even be that compelling. So pick and choose, make sure that what you're pulling together is going to be well received and will provide the appropriate what's in it for me type strategy and hook, to be able to get their buy-in through that conversation. And then, track your results. So that is it in a nutshell. 
And I know there have been a few questions coming through, but I'll turn it back over. (DESCRIPTION) Title Slide. Q and A. (SPEECH) Great, thank you both so much. I mean, just a wealth of information, and such a wonderful program that you all have set up. We don't have a ton of time, so I am going to ask one question for you. How often do you use the PDM data to update prioritization? Great question. We are using it on a quarterly basis. Again, it's difficult to get that feedback in real time, due to the cadence of reports, like I was talking about. But in terms of prioritization especially, quarterly gives us enough time to potentially show any shifts--if there have been changes to the DRGs, if maybe a different grouping has risen to the top as a new opportunity that we need to focus in on, or if others that we have been focused on have actually dropped and don't need to be a focused DRG any longer. We want to be very judicious about how many DRGs we're identifying to be focused DRGs. You can't have all of them be focused, or it kind of wipes out the point of being able to flag those as higher. So great question, and we are reevaluating and assessing on a quarterly basis, in conjunction with these PDM reports. So I think a good follow-up question actually would be, who is reviewing that PDM information and then developing the action plan during those quarterly reviews? It's a combination, a collaborative effort. As I mentioned at the beginning, we have a number of advanced CDI roles--we're very fortunate to have built this education support team that we have, with leads, CDI auditors, managers, myself, and educator roles. So we actually all share the wealth a bit. I do have a CDI auditor who is designated for much of the data and has become expert in terms of the reports, getting in there and being able to navigate most efficiently. So she and I partner together in terms of pulling out that PDM opportunity and data.
But then when we get into the case level reviews, we really assess to see who has bandwidth at what point in time, and be able to get those cases looked at to validate the data together. In terms of the action plans, it has been a combination of myself and our managers bringing those findings to our care site leadership meetings with our CMOs, to be able to then further prioritize and determine whom we would want to be meeting with. So I hope that answers the question sufficiently. (DESCRIPTION) Title Slide. That's a wrap! (SPEECH) Yeah, I think that was perfect. Again, cannot thank you both enough for the great information that you provided today. (DESCRIPTION) Slide. Consulting and Outsourced Services Content. Three graphics of photos and text, too small to read. (SPEECH) If you are interested in learning more, that link that is in the resources section didn't work, so I posted it in the Q&A section, where you can get some of the services that we do provide with performance data monitoring. We'll be putting this recording on our website soon, so if you do want to listen in again, that will be available. The handout is available in the resources section, as well as the ability to register for our next webinar that's coming up in August. So if you do want to learn more about HCCs, you can register for that in the resources section. And again, we always appreciate your feedback. So if you have the opportunity to complete the survey, we certainly appreciate that, as well. Again, Carrie and Kaycie, thank you both so much for the information today. And we really appreciate your time, as well as everybody joining today. So thank you all again, and we look forward to seeing you in August. (DESCRIPTION) Slide. Thank you.

      Webinar title slide

      Data as a catalyst to CDI program performance and physician engagement: A four step approach

      • June 2022

        In this presentation, attendees will hear how legacy SCL Health, now the Peaks Region of Intermountain Healthcare, leveraged claims data to conduct an in-depth CDI performance reporting and analysis. Participants will learn how legacy SCL Health created a targeted strategy to engage and educate physicians in a four-step data-driven approach focused on key outcomes, early wins, expansion to all payers and increased commitment from leadership.

      • Download the handout (PDF, 2.8 MB)

• (DESCRIPTION) A slideshow. Slide, New year, new webinar platform! A woman appears on a video call in the top left corner of the slide (SPEECH) Well hello and good afternoon, and thank you for joining the first CDI Innovation webinar of 2022. (DESCRIPTION) Slide, Housekeeping, a bullet point list (SPEECH) We are excited to have Tami Gomez here with us today. Before we get started, I just wanted to go over some housekeeping items. If you were with us last year, you may notice that we are using a new webinar platform. We are excited for this new enhanced user experience, so before we kick things off, I wanted to go over some of the new features and layout. There is an engagement toolbar at the bottom of your screen that you can use for the different sections of the portal. You also have the ability to move and minimize those different sections. Because this is a web-based platform, there is not a dial-in number to participate by phone. If you are having audio issues, please check your speaker settings, clear your cache, and refresh your browser. If you do need closed captioning, we offer that within the live stream section; you can click on it to enable that feature. As always, we encourage questions throughout the webinar. We have a lot to get through today, so we will personally follow up after, but please add all of those questions to the Q&A box below. We do provide a certificate of attendance that you can submit to obtain credits, as well as the handouts for the webinar. Those can both be found in the resources section for download. If you would like to learn more about our products and solutions, you can click on the Learn More button under the slides. And as always, we appreciate your feedback, so during the webinar there is the ability to complete the survey in the portal, or it will launch at the end of the webinar.
But if you do ever have a question, again, with those enhancement tools at the bottom of your screen, there is the ability to contact us. (DESCRIPTION) Slide, 3 M C D I Innovation Webinar Series, February 2022 (SPEECH) All right, so before we get started, I do just want to introduce today's speaker. Again, we have Tami Gomez, as she goes over a global approach to engaging physicians and CDI operations with an AI-powered CDI workflow. Tami is an AHIMA-approved ICD-10 trainer and the director of coding and CDI services at UC Davis. UC Davis has been named a coding and CDI gold standard program for data analytics by Vizient and was awarded for their diversity in 2021 by ACDIS. And so Tami, I am going to pass things over to you so you can go ahead and get started. (DESCRIPTION) Slide, Meet our speaker. New slide, Agenda, a bullet point list. Tami appears on the video call (SPEECH) Thank you. Thanks for having me today. So today we're going to talk about how to understand and prepare tactics, and how we actually leverage the 3M M*Modal CDI Engage One for our inpatient team. I'm going to talk about the impact automation has on some of your key performance indicators, understanding strategies to engage our physicians, how to leverage your data and focus the work through stabilization, and understanding our lessons learned in implementation. (DESCRIPTION) Slide, Why are we doing this? (SPEECH) So first we asked, why are we doing this? Leveraging technology to make CDI operations efficient, easy to manage, and to partner across departments with ease. Technology in many ways is really doing more with less, as we are now empowered by artificial intelligence, so that was really the goal here. (DESCRIPTION) Slide, Who we are, a bullet point list and a picture of a hospital (SPEECH) So just a little bit about who we are: UC Davis is a 625-bed multidisciplinary academic medical center. We are a burn institute and a children's hospital as well.
We are in the process of building a new California tower, which will add 75 additional beds. We serve 33 counties covering about 65,000 square miles, an area reaching north to the Oregon border and east to the Nevada border. We're recognized as one of the most wired hospitals in the US. We are ranked Sacramento's top hospital by U.S. News and World Report, among the nation's best in 13 medical specialties, and we've been recognized as the best hospital four years in a row in the greater Sacramento area. (DESCRIPTION) Slide, Organizational Chart: Health Information Management (Patient Revenue Cycle) (SPEECH) I just want to give you a little bit of background about the organizational chart. CDI and coding report up through the revenue cycle. There is the CFO, and then an executive director, and then I am the director over coding and CDI services. But I also have a team of physician advocates, and those individuals actually are physician trainers. They help with documentation integrity by building templates and smart lists and dot phrases. They have a big role in helping to ensure documentation throughout the record. There's a coding manager on both the inpatient and outpatient side, there's an outpatient CDI supervisor and an inpatient CDI manager, and then we have a whole data quality integrity program as well that supports all of the analytics to drive KPIs and performance improvement. (DESCRIPTION) Slide, Homegrown auto-assignment & leveraging 3 M (SPEECH) So I'm going to start off by talking about how we were able to create a homegrown auto-assignment leveraging 3M. (DESCRIPTION) Slide, Birth of Auto Assignment - No direct integration with 3 M, a list (SPEECH) The starting point to making the most of AI was: how could we develop some type of automation for assigning CDI daily cases? As you know, every morning this was a very manual process for us.
We would look at our admissions for that day, and we'd have to manually distribute them and prioritize which ones we could review based on how many people we had off, so it was a lot of manual work. It took about three to four hours to complete on a good day, and on Mondays it was much worse, as you can probably imagine, because we had admissions from Friday, Saturday, and Sunday to consider. It was our goal to fine-tune this process. So as we approached going live with CDI Engage One, we also talked about how we could automate assignment for CDI. We did initial reviews, and then we looked at Tableau assignment, and that was the approach we took. We used historical data to identify the average number of new reviews, and then we used prioritization as a form of hierarchy for what we would review. (DESCRIPTION) Slide, Creating the Logic: How to Start, a list (SPEECH) So how we started is we created logic. We worked with some very talented report writers. We started with the hospital service and changed that to the hospital division. We looked back three days, with logic to not duplicate. We excluded patients who had been discharged, so if a patient had been discharged, they were excluded. We also excluded newborns; basically, the logic looked for any newborn admission type and/or a "baby girl" or "baby boy" within the patient's name. (DESCRIPTION) Slide, Setting Max Accounts: Eliminating Reconciliation, a table (SPEECH) We also set max accounts. Eliminating reconciliation has allowed us to assign more cases, and I'll talk briefly about what we did. At UC Davis, we had a really high coding accuracy rate. We had two independent audits done on our coding, and our coding accuracy rates are around 99.96%. I really felt that the time spent trying to determine why there was a DRG mismatch wasn't the best use of time and that there could be a better process in place.
And so we eliminated the DRG reconciliation process for the CDIs on the front end. They're not doing any DRG reconciliation, but I do have a back-end reviewer who takes a look at all the DRG mismatches every day and provides individual feedback with any references, whether it's a coding clinic or something that was documented after their last review. That daily feedback to staff enabled them to spend about 33% more of their day doing concurrent clinical reviews. What we did is we looked at each day of the week and decided how we wanted to create the logic to assign cases, and this has been tweaked multiple times. So you may start out with saying, OK, on a Monday, if we have one person on PTO we're going to assign 10 cases to every CDI, but if we don't have anybody out on PTO maybe we'll do 11. We programmed holidays into the system. We've connected this to an actual Teams calendar where employees put in their time off, so the logic recognizes when somebody is off and doesn't assign a case to them. If we have two or more people out on a Monday, then 11 get assigned, and so on. You get the gist: Tuesday is 8, Wednesday through Friday is 7, and if we have people working on the weekends it's 7. However, we've decided to tweak these numbers a bit, so on Monday it's 11 or 12 depending upon the circumstances, on Tuesday it's 8 or 9 depending on the circumstances, and then Wednesday through Friday it's 7 or 8 depending on the circumstances. (DESCRIPTION) Slide, One-Size will not work, Program Flexibility and Triggers are key, a list (SPEECH) So one size fits all will not work. You have to be flexible, and triggers are the key. We created a database to check schedules and check when there are holidays or when staff is off, and so we created all of these checkpoints to make sure that the system and the logic recognized when not to assign a case.
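As a rough illustration only, the exclusion rules and day-of-week caps described here might look something like the sketch below. The field names, the cap numbers, and the round-robin distribution are assumptions pieced together from the talk, not UC Davis's actual report-writer logic, which changed repeatedly over time.

```python
from datetime import date

def is_excluded(case: dict) -> bool:
    """Exclusions described in the talk: discharged patients and newborns."""
    if case["discharged"]:
        return True
    if case["admission_type"].lower() == "newborn":
        return True
    name = case["patient_name"].lower()
    return "baby girl" in name or "baby boy" in name

def daily_cap(day: date, staff_off: int) -> int:
    """Illustrative per-CDI caps (Mon 11, Tue 8, other days 7), adjusted for PTO.
    UC Davis tweaked these numbers many times, so treat them as placeholders."""
    weekday = day.weekday()  # Monday == 0
    if weekday == 0:
        return 10 if staff_off == 1 else 11
    if weekday == 1:
        return 8
    return 7

def assign(cases, reviewers, day, pto):
    """Round-robin non-excluded cases to reviewers not on PTO, up to the cap."""
    available = [r for r in reviewers if r not in pto]
    cap = daily_cap(day, len(pto))
    assignments = {r: [] for r in available}
    i = 0
    for case in (c for c in cases if not is_excluded(c)):
        if all(len(v) >= cap for v in assignments.values()):
            break  # everyone is at their max for the day
        while len(assignments[available[i % len(available)]]) >= cap:
            i += 1  # skip reviewers who are already full
        assignments[available[i % len(available)]].append(case)
        i += 1
    return assignments
```

The point of the sketch is the shape of the rule, not the numbers: one filter for exclusions, one table for caps, and a calendar-aware distribution step.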
(DESCRIPTION) Slide, Auto-assignment & concurrent reviews: Prioritization, a bullet point list (SPEECH) So, auto-assignment and concurrent reviews: the prioritization within CDI Engage One made this a little bit easier, so I'll go over that. Our challenge with auto-assignment was managing and organizing concurrent reviews. The good news is that we had 3M with the key prioritization factor to assist with managing concurrent patients. And so what we did is we customized that prioritization list to look at all accounts that had just a single CC or a single MCC. We were looking at all mortalities, and we were looking at accounts with pending queries, which are reviewed daily. We were looking at malnutrition cases because there's an organizational goal associated with that, and we were looking at certain sepsis cases because of the high clinical validation denial rates that we're starting to see. This has been ongoing, and prioritization will continue to be ongoing as our KPIs change organizationally. So we are digging deeper and prioritizing accounts to maintain a total of 20-40 reviews per week right now. Our CDIs have anywhere between 36-40, not to exceed 40 total cases that they're reviewing, and that includes initial and re-reviews. We also said we want to remove cases from the priority list if they have two CCs or two MCCs, if they're fully optimized from the SOI and ROM perspective, and so on. So you can really customize that prioritization list to your needs and your organizational challenges and make changes to align with what you need. (DESCRIPTION) Slide, Leveraging 3M: Concurrent review prioritization, a list (SPEECH) On concurrent review prioritization: priority scoring for concurrent reviews can help surface CDI opportunity. So anytime there is a PSI, it falls on that priority list.
The same goes for medical or surgical cases without a CC or an MCC, or if there's a symptom diagnosis that's driving the DRG. And then there's the 3M prioritization and scoring; we can also customize that scoring. If we want to focus on and make certain things a priority, we can do that organizationally. (DESCRIPTION) Slide, Scoring & Priority Factors, a screenshot (SPEECH) This is just a screenshot of scoring and the priority factors; I just wanted to share that with you. It's kind of a lot on this slide, so I won't go into it, but we've created some customization around this so that we can make prioritization effective for our organizational needs. (DESCRIPTION) Slide, CDI Teams: Prioritizing concurrent reviews, a screenshot (SPEECH) And this is just another snapshot of the CDI team's prioritization of concurrent reviews and how they look on the screen. (DESCRIPTION) Slide, CDI: Evidence sheets - heavy lifting by tool, a screenshot (SPEECH) We also have evidence sheets as part of the CDI Engage One tool, and they actually do a lot of the heavy lifting. What this does is it alerts the CDI if there's a potential query opportunity. In some cases, this may be something your CDI already has on their radar and is following, so it's just confirmation that they're on the right track. And sometimes it may be something that they had overlooked or missed, and this is popping up to let them know that they should either keep it on the radar and follow it or that there's a query opportunity. So we use the evidence sheets as well. (DESCRIPTION) Slide, Other incentives IP CDI evidence sheets provide, two screenshots (SPEECH) There are other incentives that inpatient CDI evidence sheets provide as well, and this is what that looks like. So this is just another screenshot of what the evidence sheets and prioritization look like together. (DESCRIPTION) Slide, FY 2021: Auto Assignment Data.
A bar graph comparing 2020 and 2021 shows the numbers for 2021 higher in all categories (SPEECH) We also did a comparison for fiscal year 2021, and you can see the impact we had as we changed to auto-assignment and how many more cases we were able to get to when we compare 2020 to 2021. So this is just a slide to show that by eliminating your DRG mismatch work and also using your prioritization tools, auto-assignment, and evidence sheets, all of that automation can help increase the number of reviews and the number of cases that your team can touch. (DESCRIPTION) Slide, Query Rate: 2020 Compared to 2021, a line graph (SPEECH) This is also a query rate comparing 2020 to 2021, and our query rates went up as well. So what we did is we used the CDI Engage One evidence sheets, we turned on the auto-assignment, and we also used our data to drive some of our improvement metrics to continue to tweak and refine some of the processes that we put in place. Again, that's going to be ongoing. I think no matter what you're doing, there's always going to be an opportunity to continue to enhance and improve on automation, on processes that you've put in place, or on how you prioritize your reviews. (DESCRIPTION) Slide, KPI Improvement Journey: Coding and Clinical Documentation Integrity (SPEECH) I'm going to go over a little bit of our key performance improvement and the journey we had with seeing improvement. (DESCRIPTION) Slide, What we did to improve KPIs, a bullet point list (SPEECH) So what we did to improve our KPIs: we expanded our CDI program, we discontinued the reconciliation process, which I've mentioned, we perform ongoing audits on both the coding and the CDI program, and we established back-end reviews and controls to ensure integrity. We've invested in technology: the CAPD, the HCC Management, and CDI Engage One, which includes those prioritization tools. And we do data analysis; we're big on data.
And we've done a lot of work around decreasing one-day stays. We found that as an organization we were an outlier in that area, and it did create some opportunities. And then template builds, utilization of dot phrases, smart lists, et cetera. (DESCRIPTION) Slide, bullet point list continued (SPEECH) Physician buy-in and education were also key. We had to designate physician champions for CDI on both the inpatient and outpatient side of the house. We aligned with our physician advisors, our case management team, quality and safety, patient financial services, and population health, and then we customized data and did analysis that was actionable for various service lines. So we leverage data and analytics to drive improvement both in documentation and operationally. (DESCRIPTION) Slide, Case management and leveraging 3M, a screenshot (SPEECH) The next couple of slides will show some of what we've done with case management. If you're familiar with the working DRG, we basically send all cases over to case management via an interface when there is a working DRG assigned by the CDI, so that they have that geometric mean length of stay to help improve our outcomes with hospital length of stay. But we also realized that, hey, they're not touching 100% of every case, so what could we do to get them a working DRG on every case? Well, there is also an auto-suggested DRG. So if the CDI doesn't touch the case, the CAC will come in, review the record, and auto-assign an MS-DRG, and that will also interface over to the case management team so they have that geometric mean length of stay. We did educate them on the fact that this is not a human being touching this, this is all AI, and that things could change by discharge.
So they understand that this is just a preliminary look based on documentation in the record, but it has really helped that team understand the geometric mean length of stay and how our patients should be managed in terms of trying to discharge them in a timely way. (DESCRIPTION) Slide, Epic View, a screenshot (SPEECH) This is just a view of where they can see that in Epic. So again, there's an interface that goes out of 3M into our EHR, and that's where they find that information in the chart. (DESCRIPTION) Slide, Case Mix Index, a line graph and two bar graphs. All three graphs show a steady increase over time (SPEECH) So this is just a snapshot of case mix index. While case mix index isn't a great indicator of CDI work, it is something that we have tracked as a KPI for CDI because we do have some impact, especially when we talk about capturing CCs and MCCs to drive that case mix up. You can see right around here is where we implemented our artificial intelligence, and you can see the impact it's had both on our adult population and our pediatric population. (DESCRIPTION) Slide, CC/MCC Capture rates, two line graphs, two bar graphs, and a scatter plot. All graphs show a steady increase over time (SPEECH) Now, while I just mentioned CMI is not always a great key performance indicator for CDI, in my humble opinion CC/MCC capture rates are. And as you can see here, the same trend is happening with our adult CC and MCC capture and our pediatric CC/MCC capture. Not only that, but when you come over here on this slide to the right, you can see the trend from fiscal year 2020 to fiscal year 2021. And over here where it says AMC distribution, basically these gray dots are all academic medical centers and where they fall with regard to their CC and MCC capture rate.
And we're this dark blue dot here, so we're technically in the top 10% of all academic medical centers within our benchmark group, and there are 180 or so academic medical centers. This tells me this is really a direct reflection of CDI work. In fact, I can take this data and quantify, using some of the data that we have within 3M, that the CC or the MCC was a direct reflection of either querying, or CAPD, or one of the methods that we're actually using to touch cases. (DESCRIPTION) Slide, Strategies to engage physicians. New slide, Phase 1: Kicking Off the Project (Initiation of Partnerships), a bullet point list (SPEECH) The next couple of slides will be strategies on how to engage your physicians. It's not always easy kicking off the project; we really had a large group of individuals. We partnered with our system administrator, our service line medical directors, and our physicians, who are obviously key. So, depending on your environment: we partnered with attending physicians to meet and kick off the project, and began to establish partnerships with clinic managers and physician specialties to leverage physician connections with medical assistants and nursing teams as well. This does work virtually if executed correctly, because we had to do it that way due to COVID, so I can say without a doubt that it can be done. Again, when presenting, keep it to 15 minutes and always be ready to do a demo that works perfectly. So when we were meeting with them to talk about CAPD, and why it's important, and why we were rolling this product out, there were a lot of questions: why are we doing this? This is one more thing that we have to do. And really, the education was focused on why CAPD capture is important and on leveraging any data available: RAF scores, MIPS, risk adjustment. So we talked about how this product actually engages with the physician in real time at the point of care.
Instead of receiving a query two or three days later, this really is something that will ping you in real time for you to enhance your documentation. And so you've got to keep at it; you're going to get physicians who are going to be naysayers, honestly, or who are just not interested in hearing what you have to say. And so what we tried to do is get some champions behind us, get physicians to see the importance behind this product, and we kept at it. We kept customizing, and tweaking, and turning things on and off, and doing what we could to make this as meaningful as possible for them, because if it's not meaningful for the providers, they're not going to engage with it. My one takeaway here is that it was not immediately accepted, and physicians weren't readily receptive to this, but we kept at it: we kept working with them, we kept enhancing things, we kept customizing things, and that's where we really got physician buy-in and engagement. (DESCRIPTION) Slide, Phase 2: How to Engage Physicians (Resources), a bullet point list (SPEECH) Resources are essential. So, tools for physicians: tip sheets and videos. We actually sent out a video, and we have an EMR newsletter and sent out some information in that. So wherever we could create tools or ways or enhancements, we did. Again, we kept it to five minutes. Our last video was eight minutes, when we recently launched HCC Engage with our providers, and the feedback was that it was too long, so we condensed it. Focus on showing physicians how to answer and engage with the tool in these videos. And then you need to have physician educators and trainers, people who can be shoulder to shoulder with the providers if they have questions, who can train them how to use this or walk them through every little nuance. It may be something like, how do I dock this and get it out of the way while I'm doing my charting?
And so that's what we did: we made sure that we had somebody available for these physicians whenever they had a question or a concern. (DESCRIPTION) Slide, Phase 3: Continuous Partnership, a bullet point list (SPEECH) Again, continuing to partner. We believe in continuing partnerships with key stakeholders to leverage technology to ensure success. We identify key stakeholders and design workflows for automation, and we leverage data to facilitate engagement. Using data, going through the meaning behind the nudge, and inviting physicians to the table has been extremely helpful. So when you're creating a nudge, especially a custom nudge with the CAPD, you want to look at that clinical content to make sure the nudge is firing and is meaningful to the providers. For example, there were some ad hoc out-of-the-box nudges within the content guide that 3M provided. One of them was on sodium and hyponatremia, and it fired when there was just one abnormal lab value, and our physicians said, no, we don't want that. This is what we want: we want there to be two abnormal lab findings, and we also want to know this, this, and this. And so we worked with the content team at 3M and said we'd like to revise the current nudge that you have on hyponatremia and customize it to something that is a little more meaningful to our physicians. Getting their buy-in on all of that, especially for the pediatric side and the children's hospital, has really been key. So having a physician who is willing to go over the clinical content that's going to fire that nudge will be key for your organization. Again, I can't stress it enough: be flexible. Data may change, workflows will change, but keep working the plan and keep making this something that is meaningful for the providers. How can we help? How can we change things? What would make this better?
And getting that feedback and making those tangible changes will have impact. (DESCRIPTION) Slide, CAPD (Computer Assisted Provider/Physician Documentation) (SPEECH) So, data focus and insights on the rollout with physicians; I'll go over some of that on the next slide. (DESCRIPTION) Slide, Define CAPD focus and nudge definition: Ongoing, a bullet point list (SPEECH) So, focus on the clinical conditions and procedures you turn on. Define what a nudge means to your provider community: a clinical diagnosis or procedure that has clinical evidence and a physician message. Always review the data, and always provide an overview of all nudges, the rule, and the physician message. And then the customization, as I talked about, is really the key for us, especially with the children's hospital. There is not a whole lot of clinical content in the clinical content guide that 3M offers on the pediatric side of the house, and so we really have been successful with customizing those nudges to make them meaningful for that population of patients. (DESCRIPTION) Slide, CAPD - The Why on Streamlining Physician Engagement, a list of goals (SPEECH) So, the why on streamlining physician engagement: physician documentation guidance using evidence-based clinical definitions; having a virtual conversation to add the critical details that impact treatment and outcomes; engaging physicians at the point of care to reduce queries; and overall quality improvement in patient care outcomes. That's really your clinical decision arm, so those were the goals. But also, on engaging physicians at the point of care to reduce queries: what we found is that some of these nudges, which are things like CHF acuity or acute blood loss anemia, are things that physicians have been queried on routinely at our organization and have done a really good job of addressing. And so we don't have a whole lot of opportunity there.
But what we found was that there was opportunity with certain things. We ran a lot of data, we looked at what our number one query was organizationally and by service line, and got really granular, and we were very specific and deliberate about what we turned on, where we turned it on, and for whom. (DESCRIPTION) Slide, What is required for a nudge to fire? (Repeat/Rewind), a picture of a fire in a fireplace (SPEECH) And then using the data that we have has really been key, so that we can go back to providers and say, here is your capture rate on this diagnosis for this patient population, and here is where the rest of your peers within an academic medical setting are on capturing this. And when they can see something tangible like, hey, I'm only capturing this diagnosis 5% of the time compared to my peers who are capturing it 25% of the time, they're very engaged and interested in what they can do to be better at documenting that specific condition, or whatever it might be. (DESCRIPTION) Slide, A Nudge Requires, a bullet point list (SPEECH) So this next slide covers what's required for a nudge to fire, and we're really just going to be on repeat and rewind from here on out. A nudge requires specific criteria: a rule that points to the clinical evidence or documentation we want the tool to reason over before firing. For example, the clinical note says sodium is 128. The program fires the nudge for a clinical diagnosis as it relates to clinical evidence: evidence of hyponatremia, a sodium level, without explicit physician mention of the diagnosis. A physician message will then populate in the Fluency Direct pill, which is part of the CAPD, and it will say something like: we have identified electrolyte imbalances; if appropriate, please document the associated diagnosis. The diagnosis is hyponatremia. A clinician can then replace the sodium.
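As a loose sketch of that firing logic, in pseudocode-like Python: the threshold value, the function name, and the two-abnormal-value requirement are illustrative assumptions drawn from the customization described in the talk, not 3M's actual rule engine.

```python
SODIUM_LOW = 135  # mEq/L; a typical lower limit of normal, used here for illustration

def hyponatremia_nudge(sodium_values, documented_diagnoses, min_abnormal=2):
    """Fire only when enough abnormal sodium values are present AND the
    diagnosis is not already documented, so the nudge never leads."""
    if any("hyponatremia" in d.lower() for d in documented_diagnoses):
        return None  # already documented; nothing to nudge
    abnormal = [v for v in sodium_values if v < SODIUM_LOW]
    if len(abnormal) < min_abnormal:
        return None  # the customized rule requires two abnormal findings, not one
    return ("We have identified electrolyte imbalances. If appropriate, "
            "please document the associated diagnosis: hyponatremia.")
```

Note how both gates matter for compliance: the evidence gate keeps the nudge clinically grounded, and the already-documented gate keeps it from being a leading prompt.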
There really is, again, a content guide that's provided to you out of the gate from 3M, and you'll have to take a look at that clinical content to see if it's actually something that you would query a provider on. And if it's not, you're going to want to tweak it and customize it to your organizational needs. (DESCRIPTION) Slide, July 2021 CAPD Data: Top 4 clinical conditions reviewed for accuracy and review. Data source: 3M CAPD utilization reports. New slide, line graphs for five conditions (SPEECH) So this next slide is just the data source. This is where we're at today with the UC Davis CAPD utilization. I just looked at the top five nudges that we have turned on, which are diabetes, respiratory failure, a-fib, kidney disease, and cardiovascular congenital conditions. You can see the overall compliance rate for those right now is 77%, but mind you, when we first went live, we were in the 25% to 30% range, so this is significant improvement in less than a year. I think if you stick to the program, you'll start to see compliance rates up there in the 80% to 90% range, which is where you ideally would like to be. (DESCRIPTION) Slide, a table showing diagnosis, rule, message, and evidence (SPEECH) This is just a snapshot of what the nudge rule looks like. So for anemia specificity, you're going to look at the clinical diagnosis, the clinical rule, and what the physician message looks like. We updated this for surgery because it would show up as a blood disorder, and the physicians were kind of confused: what do you want from me? A blood disorder could mean something like pancytopenia, it could mean something like leukemia, so what is it that you want from me? So we worked to address that issue and created a custom nudge that actually said anemia. And you can see the same thing for hyponatremia and acute respiratory failure: what's actually being used in terms of the clinical rule, physician message, and supporting evidence.
And these are all things that can be customized. If something in the content guide that's offered to you through the vendor, 3M, is not applicable to you, you can customize it, which, again, has been key for us, because we found a lot of things that really made a difference when we customized them, and that's where we started to see higher compliance rates. (DESCRIPTION) Slide, an excerpt from the table for Heart Failure Specificity (SPEECH) This is just another snapshot of what heart failure looks like: the clinical rule, the physician message, and some of the supporting clinical evidence for the nudge to fire. (DESCRIPTION) Slide, Lessons Learned, a hierarchical data tree (SPEECH) When we talk about lessons learned, I think I've gone over some of those already. But focus on which physician groups you want to start with. We were very deliberate about that; we actually piloted a group of physicians. We had one surgeon, we had one pediatric physician, we had a hospitalist, and I think we had maybe a specialty physician as well. And we looked at all of the data that we had on our current queries, the percentage of queries we were sending, and what the top queries were, and we turned those nudges on. We piloted it, and we got a lot of feedback and a lot of information that we were able to take back to improve things, and tweak things, and customize things. And before we went live, we made sure that all of that feedback was taken into consideration to improve outcomes. CAPD can work, but be patient and don't give up. I mean, that was our thing, as we have about 3,000 physicians turned on now. And of those 3,000, I think five were absolutely adamant that they wanted it turned off. They were great documenters already, they didn't feel they needed this, and it was just one more thing that they didn't want to deal with. And so we think that's successful in our eyes.
We worked with them to try to convince them of the value of this tool, but to no avail, so I think you have to really work with physicians, make this meaningful to them, and customize it to their needs. Always acknowledge a physician when they're providing feedback, especially if they're complaining. What I like to do is say, hey, all your points are valid, what can I do to make this better? How can I help you document better? What can we do? And then we take their feedback and work with them individually. I think when they are involved, or feel like they have a voice, they're a lot more open to working with you and to engaging with the tools. (DESCRIPTION) Slide, a screenshot of a diagnostic form (SPEECH) Again: customization, know the documentation, keep things in perspective. Remember, this is a computer, but you can make it work. Customization for us, I can't say it enough, has been key. We're going to continue to customize; we are basically just scratching the surface with customization, and I think we're at a point in time where we can be very deliberate and very meaningful about what we turn on for providers to ensure engagement continues to go up, the product continues to be meaningful, and we continue to see impact on our overall key performance indicators. And I think that is my last slide. (DESCRIPTION) Slide, Q & A (SPEECH) It is. Thank you so much, Tami. The information is just incredible, as is what your team was able to stand up. We do have a couple of questions that I think would be good for you to address. We do have a little bit of extra time, and I just want to be cognizant of the time for everyone, but first, thank you, Deanne, who said that they really enjoyed your presentation and then also asked: is there any work you have done on the day-one stays? Yes, for the one-day stays, excuse me, I flip-flopped that. Yeah, so I'm glad you asked that question.
So as you know, CDIs really can't always even get to the one-day stays, and so we've excluded them from the reviews that are being done by the CDI team. But what we did is we went to leadership, and we acknowledged that when we ran data we found we were an outlier for one-day stays, in terms of the percentage of patients who were here one day, went home, and were counted as an inpatient admission; we were an outlier compared to our peers. I think it was 25% of our patients who were here one day. We noted that it diluted our case mix index and diluted our CC/MCC capture. It diluted our mortality, and it artificially inflated our length of stay metrics. And so we went to leadership and said it also impacts throughput. We took these numbers to our case management team and asked, could these patients be better served in an observation status or an outpatient bed, to determine appropriateness of admissions? And the other thing we said is, we can't get to them from a CDI perspective to try to optimize them. And so we took a different approach with how we were going to address one-day stays, and we ran data on the top DRGs for the one-day stays, both on peds and adults. We found on peds it was asthma, and we found that there was a best practice at NYU where they created a clinical pathway in the ED for pediatric patients and observed them at 2, 4, and 6 hours; if they had improved after six hours they were put in observation, and if they hadn't, they were admitted. And then we found on the adult population it was things like gastroenteritis and seizures, and we created similar clinical pathways for those. And so the CDI team really took that information back to the clinical teams and said, here's what it looks like. Here are your top DRGs; could there be something like what NYU did here at UC Davis? And what we saw immediately was a correlation with an increase in CC/MCC capture and an increase in our CMI, as well as in our mortality metrics.
So we don't look at one-day stays, because there's obviously a delay with physicians getting documentation on the record, and there's not a whole lot of opportunity for the CDI to review them. And as you know, it's almost meaningless to review the case on day one without that documentation in there, and then the patient goes home the next day. So the approach we took was operational: could these patients be removed from our review process and from the observed outcomes, and what could we do better organizationally? I hope that answered your question. Yes, absolutely. And that actually stirred some questions; we have a bunch coming in, so we will get to a few more here. How many nudges do you have active for each service line, and how did you select which nudges to turn on? So we were very deliberate about that; again, we pulled data. For example, we pulled the hospitalist group data and looked at the top five queries for that group, and then we turned those on. We were very specific about not turning on a ton of nudges; we were very deliberate about making sure that it was meaningful. So it was about five to seven per service line, and it was driven off the data we pulled to see which queries we had already been sending from a CDI perspective. But I would urge everybody to keep it to no more than seven and be deliberate about how you turn them on. So look at your current data, look at your current query patterns, look at your service lines. There are things that aren't going to be meaningful to surgeons that are meaningful to hospitalists, and so that's how I would approach it. Fantastic. Before we get to the next one, I just want to answer one question quickly from Jessica, who asked if CDI Engage One is available, and it is.
So if you would like someone from our team to contact you, click that middle button in the portal; it will take you to a form to complete, and we can follow up with you to talk about it. Let's go ahead; we do have time for a couple more. How will this process evolve to help with prior authorization and denials?

So I'm not sure yet, but I do believe there is an opportunity for us to work on making sure we get the documentation on the record, especially with sepsis, specifically the core criteria that we're seeing denials on now. My goal is to eventually use this in a way where we can get documentation on the record to demonstrate medical necessity, and also the clinical evidence to avoid some of those clinical validation denials we're seeing now for things like sepsis and malnutrition.

Great. We have a question asking how long after admit you do your first review. So initial reviews are done two to three days after the admission, and our re-reviews are done every two to three days as well, depending upon the complexity of the case and what it is they're looking at. So we give our CDI staff a choice. So yes, that's our current state.

All right, I think this kind of goes along with it: how long should a chart be on hold for a query reply? So we have a query escalation process in place. Concurrently, if CDI has sent a query and there's no answer within 48 hours and the patient is still in the house, we escalate to our physician advisors through a portal we created in Microsoft Teams. If it's a retrospective query, and it's something being held for a CC or an MCC, or for procedures that will drive or change your DRG, we hold up to 10 days retrospectively, but only in cases where it's a reportable quality outcome, like I said, or a procedure question.
But we typically don't ever have to hold for 10 days; I will say our physicians are pretty good at getting back to us within 72 hours, retrospectively anyway.

Perfect. With the nudges, how often do you evaluate the response and improvement to documentation, and adjust the nudges to continue to target the top diagnoses? So we look at this monthly. Yeah. Wow, that's a lot, and probably a lot of work for your team.

Rhonda asked: we cannot lead to a diagnosis in a query, so isn't providing a diagnosis in a nudge leading? And are nudges visible to others and part of the permanent record? So we don't lead either; those are the rules we were talking about. There must be that clinical evidence, those risk factors, and that treatment, and you build the nudge to make sure it has those things in place so that you don't lead. What it does is tell the provider that there is a diagnosis based on this treatment, this lab value, this X-ray finding, whatever it might be, and they document that in the record. We're very careful about not leading the providers and about having that clinical evidence to ensure accuracy, so we are compliant with that. This is a product that will only nudge when clinical evidence, risk factors, and treatment exist. And that's one of the things I was talking about: sometimes the clinical evidence may be just one abnormal lab finding, like the sodium we talked about earlier. In my opinion, that could be dilution from surgery, or it could be something completely unrelated to a true diagnosis of hyponatremia. So we weren't comfortable turning that one on, and we made deliberate changes to the clinical evidence required for it to fire. So it will require some work on your end to make sure that you are not leading the provider.
So our queries are a permanent part of the record. Our nudges fire for the physicians; they see them as they fire, and they document in the record.

Fantastic. We do have a couple more questions that we will follow up on afterward. Tami, I do want to thank you for your time today. (DESCRIPTION) Slide, That's a wrap! (SPEECH) We've had a lot of comments come in just to say how great the information and the presentation were, so we greatly thank you for that.

Just a reminder to attendees: the certificate of attendance can be downloaded, and if you want to submit it to an association to obtain CEUs, you can. We also provided the handout in the resources section; both are there. If you are interested in learning more about CDI Engage One, which was discussed today, you can click the button in the middle and let us know, and we'll follow up with you. The archive of this recording will be on our website in the next couple of weeks, so if you want to go back and listen, you can.

And lastly, we will be here again; we're doing these every other month, and I can't believe it's already almost March, so in April we will be back with another CDI Innovation webinar. Be on the lookout to register. We appreciate your feedback, so please complete the survey at the end. Again, Tami, we cannot thank you enough for your time today, and we welcome you back anytime. Have a great rest of the day, and to everybody else who joined, we thank you.

      Webinar title slide

      Leveraging technology to engage physicians and improve CDI operations with AI-powered CDI

      • February 2022

        UC Davis Director of Coding and CDI Services Tami Gomez and her team have a mission: Build a gold-standard CDI program, with streamlined workflows that allow physicians to focus on patient-centered care. To support this goal, UC Davis implemented 3M’s advanced AI and NLU technologies, automatically embedding clinical intelligence into normal physician and CDI workflows.

        Join Tami for an inside look at UC Davis’ operations and transformation strategy. You’ll learn how the team laid the groundwork for new technology, how they’re using automation to drive key performance indicators, and how they approach physician engagement. Tami will also cover lessons learned to date, along with how the organization is using data to continually improve and optimize.