Transcript
Rae Tompkins (00:00) Hi, Khushi. Hello? Good morning. Hi, Kim.
Rae Tompkins (00:11) Good morning.
Rae Tompkins (00:54) Hi, Ashley. Hi, Susan. Hello?
Rae Tompkins (01:35) Hi, Anna. Hi, Connie. Hi, hello. Alrighty. I think we have everyone, so we can go ahead and get started, kicking off the meeting with the metrics. I apologize for the percentages; that was my error. I should have spot-checked before sending the most recent numbers, but I did provide an updated table with the corrected percentages, as well as the raw data tables that we're pulling that information from. Were there any specific questions on the metrics? Were they still not matching Optum's…
Susan E Hulin (02:11) …data? Yeah, we just wanted the raw data; that will give us the ability to look at how you were putting them together. The only thing I was curious about was the statement in your email where you said the metrics aren't static. Can you explain that a little bit more? Because essentially, when we close the inventory for the week and you send it over, that should be final. We might have something on our end, if a file didn't attach or something, so it could be a little bit different. But from your perspective, whatever the status of that file is for the week, once it's been sent to us, that should be closed and should be the final status, correct?
Rae Tompkins (02:59) Absolutely, yeah. And I think that was something we were running into previously: we were double-checking the week's worth of metrics, and the numbers were changing due to files that may have been closed or archived during that period that our platform hadn't caught up on. But based on what we were able to pull into the updated table, the numbers should be much closer to Optum's, and there shouldn't be any additional discrepancies.
Susan E Hulin (03:23) Okay. Thank you. Like I said, I was just kind of concerned with that word in the email, because the numbers should be the numbers. So, exactly.
Rae Tompkins (03:35) Yes, and let me know if you have any issues accessing the reports. I think I had to send one via Google Drive due to the size, so I can try to pull that one out and resend it if the Optum team has any issues accessing that file. Just let me know. Additionally, all those raw tables are available in platform, so if you need to access them that way, just let me know; I'm happy to walk you through the process of how to access the information.
Susan E Hulin (04:00) It was more about, I just want to make sure that we're looking at exactly what you reported in the email. I just want to make sure we're aligning, so we can see the exact data and whether we would pull it or put it in the same buckets; just want to look at it from that lens. Yeah.
Rae Tompkins (04:20) Of course, just let us know if you need anything else in the interim, okay?
Susan E Hulin (04:23) Thank you.
Rae Tompkins (04:26) Alrighty. I'm jumping ahead to incomplete files marked clean. I can confirm that it was a cache issue that caused this error for the files that were incomplete but marked clean, and our engineering team is working on a fix to ensure that does not happen moving forward. Guardrails have been put into place to ensure that if a file is incomplete, it's not marked clean when sent over to Optum.
Niko Byron (04:48) Yeah. And just to provide a little bit more clarity on that: anything that comes over as clean via the API is actually clean, so you don't have to worry about that. I know the clean files don't go to committee, and that should be functioning as expected; those are truly clean files. It's just in the actual PDF, and in the UI, that the file status is incorrect, where it says incomplete even though it comes over as clean via the API.
Anna Jacobson (05:17) Okay. Because in that specific example, there was a PSV that was marked incomplete, I think it was the Medicaid exclusion. So, was that actually complete? I think that was what was confusing about that example. Yeah.
Niko Byron (05:32) In the actual packet, right? Like in the PDF packet, it was marked as incomplete. Yeah.
Anna Jacobson (05:38) Like in the, I think in like the PSV part?
Niko Byron (05:43) I think the example I looked at was, that was Terry Maples, right? That was who that was. Sounds…
Anna Jacobson (05:50) …right. Yeah.
Rae Tompkins (05:51) That was her.
Susan E Hulin (05:52) Can you bring that one up, Anna? Maybe we need to walk through it together. Yeah.
Rae Tompkins (05:56) And I can, give me one second, pull up the screenshots that were provided. That way we can look at it together.
Rae Tompkins (06:26) There was one, I can pull up the file.
Rae Tompkins (06:39) Let me see, that was the one that was identified via email that we looked at together. But I can pull up the files so we can see. Yeah.
Niko Byron (06:45) We can just pull it up, because we can show in the UI that our PSV report does show clean for her. And let me just download the packet now, because maybe it was when it was initially downloaded. Yeah, the packet does say Medicaid exclusions incomplete. But in our platform, and even in the PSV summary in the platform, when you pull that up, it shows it's clean.
Rae Tompkins (07:19) Are you trying to get it to load for me?
Rae Tompkins (07:58) I…
Niko Byron (07:59) …I think hover over the shield first, just to show that that's our overall PSV status report that our ops agents look at. And then if you click the recredential link there, next to credentialing file, this is very similar to the overall summary on the PDF packet. And so for that Medicaid exclusions item, if we download this file, it will say something different than what's showing here. That's where the discrepancy is coming from: it is actually a clean verification, verified on 3/14, as you can see there. It's just that we generate the PDF based off a cached version, which was out of date, and that's the bug we're fixing.
Susan E Hulin (08:40) So, Anna, did it then really have everything it needed in that file when it came across?
Anna Jacobson (08:52) I mean, I think Connie might have to look at the packet and see, because all I see is the packet saying incomplete.
Ashley M Frick (08:59) I pulled the packet. No, it did not; the Medicaid exclusion said status incomplete underneath it. So.
Susan E Hulin (09:05) That wouldn’t be appropriate. So it wasn’t done.
Ashley M Frick (09:09) Correct, in the packet we received.
Susan E Hulin (09:11) So, I think we have a bigger problem than just a cache. If there is a fresher packet that was clean, we still received a packet that wasn't clean. So I think we have more going on there than just that.
Niko Byron (09:34) That should be resolved with the fix that’s coming.
Susan E Hulin (09:38) Yeah, but that doesn't make any sense, because you said it was clean but it wasn't clean, and with this file, we would not pass an audit because of what just happened here, because it's not in there.
Susan E Hulin (09:55) So, it truly wasn’t clean.
Niko Byron (09:59) Like the packet itself, you mean? Because, right.
Rae Tompkins (10:01) Because it's missing the actual…
Susan E Hulin (10:03) …PSV. Yeah.
Niko Byron (10:06) I'm just looking at the packet right now; maybe this is something we can sync on internally. But if we go to the Medicaid exclusions verification in there, what would a clean one look like? And apologies for the kind of basic question. But if you go to the front, right? There's like a link.
Niko Byron (10:30) Yeah, right there, and then Medicaid exclusions.
Yenny Zhang (10:36) We would need to see the evidence pulled from all the, like, 50-plus different sites. I see. Yeah. So it's just completely missing from this packet.
Yenny Zhang (10:51) I wonder if we could regenerate a PDF for you; would that work as an amendment? And I understand that this is obviously a problem on our end, okay?
Susan E Hulin (11:05) A couple things. So Ashley, we’ve made the correction, right? So we’re good from our end.
Ashley M Frick (11:14) I don’t know. I’d have to look into it. I didn’t look at this file.
Susan E Hulin (11:19) We'll get back to you. My biggest concern is the bad packet and the quality of what came through.
Ashley M Frick (11:29) Absolutely.
Rae Tompkins (11:32) Were there any other files, other than Terry Maples, where this was found when looking through?
Susan E Hulin (11:48) I guess, Connie and team, did we have any other ones that came over that were misidentified, that we're aware of at the moment?
Ashley M Frick (12:04) I'm not aware of any other ones, but they easily could have been put through and we would have never seen it if it came through clean. I'm not sure how this one got identified. Yeah.
Susan E Hulin (12:16) I think what we're going to have to do is go through some of the clean ones and look at them, look at the Medicare opt-out, because we know maybe that is an element that's not working. We'll have to do some post-auditing to see what's going on with them and get back to you. But again, I'm concerned, because…
Susan E Hulin (12:41) …this is a huge mess for us. So if we have a bigger problem, we're going to have to look at it. But again, Niko, when you were solutioning for the problem, it's not just clearing of the cache. It's…
Rae Tompkins (12:58) …the actual packet, it's…
Susan E Hulin (13:00) …the actual packet, it's obvious. Thank you. And so I guess we need to figure out how you're going to program this into your system so that we get the correct information coming over to us.
Niko Byron (13:14) Yeah. Is the question more of like a backfill of the ones that could have been missed, or like going forward?
Susan E Hulin (13:22) Well, we have two problems. We've been…
Susan E Hulin (13:32) We don't know what the problem is. I guess we don't know how many we have that came through, so we'll have to go back through and spend resources to review these files to ensure that we don't have a gap and we're not at risk. Second, we need a solution going forward that we can be confident of, because we've put a lot of resources into evolving this new process for us. Prior to putting this process in place, we were reviewing the files that were coming over clean and felt confident of the quality that was coming over. But right now, I pause; we have our NCQA audit coming up this year, and so I'm concerned about the quality that we have here right now, because of this and some other things that have come over the last couple of weeks, you know? So we just need to make sure that we're tightening up our processes and that you guys are putting system controls in place, that it's not just training. There should be system controls in place for the things that are coming over as errors, because we know that there's human error, but there should be system controls in place to prevent and mitigate some of the risks that are coming over to us right now.
Rae Tompkins (14:56) Absolutely. And I can escalate this internally to see if we can identify on our end some of the ones that were missed. And then, Optum, as you're reviewing, if you see any, we can investigate, and if there's something that we can regenerate or repopulate, please let us know. I know there was that one file that we processed which was on the agenda to discuss, but if these need to be reworked due to the missing verifications, please let us know.
Susan E Hulin (15:22) Yes. I guess my biggest ask to you, and to you, Niko, is to look in here and see what happened, so that we can prevent it from happening in the future from a program and a system perspective, not a training one, but a program, so that we know it won't come over if it's not right. Yeah.
Yenny Zhang (15:44) I understand the frustration here at seeing data out of sync between the cred file and the PSV status report. Our engineers are working on fixing this issue. But in addition to that, I'll bring this back to my team, the engineers, and ask them to put something in place to ensure data can't fall out of sync, to prevent this moving forward.
Susan E Hulin (16:11) When you solution for that, then, can you bring it back to our next meeting and walk through what you have put in place, so we feel confident in the solution? Yeah. Okay. Thank you.
Rae Tompkins (16:30) There was a flag regarding source name discrepancies. There were three New York psychologist files, all with different source names. So again, we have submitted a product ticket for our team to review implementing a pick list. Agents have been asked to limit free text, and we're exploring long-term consistency options to make sure that, moving forward, all of our agents are utilizing the same source name, so there is more consistency. Whenever you're reviewing a file, there aren't three names populating for the same source. Thank you.
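[Editor's note] The pick-list control being ticketed could work roughly like this; a sketch with hypothetical source names, not Medallion's actual list. Free-text entry is replaced by matching against an approved list, so one source cannot end up recorded under three different spellings:

```python
# Hypothetical pick-list values for illustration only.
ALLOWED_SOURCE_NAMES = {
    "NY Office of the Professions",
    "OIG Exclusions Database",
}

def normalize_source_name(entered: str, allowed: set[str] = ALLOWED_SOURCE_NAMES) -> str:
    """Map an agent's entry onto the canonical pick-list value,
    rejecting anything that is not on the list."""
    cleaned = entered.strip().lower()
    for name in allowed:
        if name.lower() == cleaned:
            return name
    raise ValueError(f"source name {entered!r} is not on the pick list")
```

In a UI, the same effect is usually achieved by replacing the free-text field with a dropdown bound to the approved list.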
Rae Tompkins (17:10) And then, timeline of checklist updates. Just wanted to circle back with Optum to clarify the ask here, to make sure we understand.
Susan E Hulin (17:18) When we talked a couple months ago and gave you the full updated checklist, I'd have to look back at my notes, but I thought the timeline was end of March for implementing the updates to the checklist.
Rae Tompkins (17:36) Okay, yeah. Let me circle back with our product team and see if I can get an update, and I can send over some information in a follow-up email. Okay? Thank you. And then I think the next item was a duplicate of the provider example, truly clean but confusing as to why it was marked incomplete in the packet. So again, I will definitely provide more context, either in a follow-up email or on our next call, on the guardrails that our engineering team is putting into place to ensure this doesn't happen moving forward. But in regards to Amy Barbie, Kim, I wanted to apologize again for the error that was flagged. The agent reinitiated the file when the error was sent to our team to review. This was a lead who wasn't particularly familiar with Optum's workflow, which requires that files are not reprocessed. I outlined this via email, but just wanted to reiterate that this won't happen moving forward; we've made sure that agents are aware that Optum files are not to be reprocessed. This particular one was put back so that it's not missing from platform, so there will be another one generated and sent to Optum. But…
Susan E Hulin (18:45) Again, we don't want another one sent, because it just confuses us, okay? And I'm concerned about how you had to re-initiate it, like where it went in the first place. I guess that's my biggest concern. Do we have missing files, then? Because there is no process in place that the processors are following to ensure that when a file is done and completed and sent over, we're good. I guess I don't understand your process: they deleted it out, it closed, and so the file was missing, so then you started again. I just don't understand that.
Rae Tompkins (19:27) And that’s.
Susan E Hulin (19:29) no, I totally, I agree.
Rae Tompkins (19:31) And that was, it's our normal process for clients outside of Optum, and this is just an isolated incident; this is the one and only time this has happened. It was just an agent who was unfamiliar with the process and eager to get the correction updated in platform so that the file could be regenerated. So again, this was a very isolated incident and something our team is aware of moving forward, to ensure it doesn't happen. But it is our normal processing for clients outside of Optum, so that's where the confusion came from.
Susan E Hulin (20:01) And we won't be charged again for this?
Rae Tompkins (20:04) Correct. I’ve already worked with staff to ensure that there won’t be a duplicate charge for this provider. Okay?
Susan E Hulin (20:12) Ashley, anything else on this one? Because you initiated the email?
Ashley M Frick (20:23) No, I mean, we're internally going to have to figure out what to do now, because now we have two open cred cycles that were created for this provider.
Rae Tompkins (20:35) Okay. Please let us know if there's anything we can do on our end. But again, I just wanted to flag that the team has been reminded, and this was a very isolated incident, so there shouldn't be any additional errors of this kind.
Rae Tompkins (20:54) Jumping ahead to the file size issue. With our current integration to Optum Salesforce, we're obviously limited in the size of the files that are sent over, so I wanted to suggest two solutions to the Optum team and get feedback on what may work best for your side. One option is something called multi-part messaging, which would allow Optum to receive the file split into two or three chunks. You would receive all the information for one provider, including the CAQH application, but it wouldn't come over as one large PDF; it would be split into multiple parts. The second option is removing the CAQH application from the provider's profile, understanding that it is usually a large chunk of what populates the cred files.
Connie (21:47) So if…
Rae Tompkins (21:48) …if option one, multi-part messaging, doesn't, you know, solve it there. And then obviously we can continue conversations if neither of these options is viable for the Optum team, but I wanted to flag these two to kind of start the conversation about reducing…
Susan E Hulin (22:02) …that size. I thought we already solutioned for the large files.
Susan E Hulin (22:13) Connie, didn't we solution that already? And we weren't seeing this problem. I…
Connie (22:19) …think we had, on our end. I don't know that it was.
Rae Tompkins (22:24) This was something that was flagged to us last week, yeah.
Susan E Hulin (22:27) I understand that. But we previously talked about this months ago, when we first started, that files were not attaching, so there was a solution put in place. And then all of a sudden now we're seeing a problem again. So I'm curious: we did have a solution in place, so why are we seeing a problem with them again? Yeah?
Niko Byron (22:49) So, they are very similar problems, but they're two different problems. Originally the workflow was that we would just provide the PDF URL for Optum to retrieve, for the Optum IT team to retrieve via an integration on their end. And then we found that there was a file size constraint of like eight megabytes, which is a pretty small constraint, so we had to get around that. So we built an integration that would essentially, from our end, when a file was marked ready, post it to Optum Salesforce, instead of Optum IT retrieving it and putting it in Optum Salesforce themselves. That has a file size constraint as well; it's a much larger constraint, 37.5 megabytes, so we weren't running into it almost at all until recently. And so that's why we're seeing this now: it's kind of the same issue but from a different workflow, and because it's a larger constraint, far fewer files were affected.
Susan E Hulin (23:51) Connie. So how many files are we seeing? Because it seems like it’s been a recurring message most recently?
Connie (24:00) Recently. Yeah. I mean, I’m getting more than three or four a day.
Susan E Hulin (24:05) So, is it because of what's in the packet? I mean, we've had these conversations that we're getting a lot of, like, insurance and PLIs that are expired. Is there stuff within that packet that actually needs to be there? Is there stuff that isn't necessary data? I'm just curious.
Connie (24:27) Some of it is when CAQH has got lots of pages, like lots of practice locations and things, so it makes the CAQH larger than, like, 30 pages.
Rae Tompkins (24:42) And that's definitely something we can do: take the CAQH application out of populating. It would still be available in platform, saved to the provider's profile; it just wouldn't pull into the cred packet.
Susan E Hulin (24:53) No, we need it. So, Connie and Ashley, how do you want to solve for this one?
Susan E Hulin (25:09) I mean, multiple…
Connie (25:11) …chunks, I guess, as long as Salesforce is also filling out the way it should.
Susan E Hulin (25:20) So from our IT team, is that going to work?
Khushi Soni (25:27) Yeah, that just means, instead of one attachment, we need to include multiple. And when we say multiple, Niko, there will be two, you were saying?
Niko Byron (25:40) The idea, I think, would be that it would be split. I'd have to look into this, but I think we'd try to split it at that 37.5 megabyte size. So if it's a 70 megabyte, or let's just say, to keep it easy, a 50 megabyte file, you'd have two files. If it ended up being, and I don't even know if this is something that ever gets sent over, but like 115 megabytes or something, you might have three files. But I think in almost all cases it would be two files.
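[Editor's note] A rough sketch of the multi-part idea being described, assuming the simplest scheme of chunking by raw bytes (the real integration may split differently, e.g. by PDF page): each part stays under the 37.5 MB Salesforce limit, and the receiver concatenates the parts back into the original file.

```python
import math

# Salesforce integration attachment limit cited in the call.
MAX_PART_BYTES = int(37.5 * 1024 * 1024)

def split_into_parts(payload: bytes, max_part: int = MAX_PART_BYTES) -> list[bytes]:
    """Split a file into ordered chunks, each no larger than max_part."""
    n = max(1, math.ceil(len(payload) / max_part))
    return [payload[i * max_part:(i + 1) * max_part] for i in range(n)]

def reassemble(parts: list[bytes]) -> bytes:
    """Receiver side: concatenate the parts in order to recover the file."""
    return b"".join(parts)
```

Under this scheme, a 50 MB packet comes over as two parts, and anything at or under the limit stays a single attachment.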
Susan E Hulin (26:08) So, I have a couple more questions. Alicia, is that going to work? And my thought is, can you attach the checklist separate from the CAQH, to keep the two documents intact together from an auditing perspective? And would that give us the capacity we need to move them over?
Connie (26:31) Sorry. What are you asking me? Well?
Susan E Hulin (26:33) If they have too large a file size to come back over and attach, they want to split it in two. Yep.
Connie (26:41) As long as they're in the cred cycle documents, it can be.
Susan E Hulin (26:44) You're fine with it? Yep. Okay. And you don't care how they split it, whether in half or, if they end up having to, in three, as long as you have the full document, correct? With the same date and the same application, you're okay with it? Yep. Okay. Thank you. Thanks.
Rae Tompkins (27:01) Perfect. Well, we'll ensure to resolve that issue by utilizing the multi-part messages, and we'll make sure, obviously, the CAQH is still attached to the provider's credentialing files so that everything is sent over via the Salesforce integration.
Khushi Soni (27:16) I just have one question for Kim. So, Kim…
Rae Tompkins (27:18) …will…
Khushi Soni (27:19) …these multiple files be stored under the same name, that is, the CBO checklist document?
Susan E Hulin (27:27) Yes, yes. Just put them under the same name, because then Alicia can pull it all together when she gets audited. All right. Okay. Alicia, I'm just confirming with you that that's okay, right? Yep. Okay. Thanks.
Rae Tompkins (27:46) Perfect. And I know we have two minutes, so happy to continue conversations; I have feedback on the remaining agenda items, but wanted to review the complete update action plan from the email that was sent on March 11th. The question was what measures we're putting into place to stop the wrong data from being fed over to you again. This goes back to the plan we put in place: identifying the agents through error logs, emails, and meeting agendas, to make sure they're aware of any issues that have been flagged to us by the Optum team. There are weekly team calls every Thursday. And a lot of the areas I wanted to flag on the specific topic of the reviewed app complete date were from files in October, November, and December, prior to the SOP being updated. So we're hoping that issue has been alleviated as you review more recent files, but wanted to confirm with Optum: are you seeing that in more recent files as well?
Susan E Hulin (28:50) I don't know; the answer to that would come from Ashley and Connie. But is there something in your system that you can do so it doesn't occur before it comes over?
Rae Tompkins (29:04) Yeah, and that's part of the technical product solutions we're investigating, through the pick list and making sure that guardrails are put into place.
Susan E Hulin (29:13) So, there would be logic like: if this is the attestation date, the complete date can't be before the attestation date? You know what I mean? Because we put those measures in place in our systems, like if this, then that; it says no, it's a no-go. So, yeah.
Yenny Zhang (29:38) We're making some changes to ensure that this reviewed app completion date isn't so ad hoc and manually set, and some of those changes are being made in the next month, so we'll let you know when that product release happens. There will be more automation and guardrails moving forward, so we are working on that. I understand that it doesn't make sense to have the attestation date after the completion date.
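[Editor's note] The if-this-then-that control Susan describes, sketched in Python with assumed field names; the rule is simply that the reviewed-app complete date may not precede the attestation date.

```python
from datetime import date

def check_review_dates(attestation_date: date, complete_date: date) -> list[str]:
    """Return a list of validation errors; an empty list means the dates pass.

    Guardrail: a file cannot be marked review-complete before the provider
    attested, so the complete date must be on or after the attestation date.
    """
    errors = []
    if complete_date < attestation_date:
        errors.append("reviewed-app complete date precedes attestation date")
    return errors
```

In a system enforcing this, a save with a failing date pair would be rejected ("it's a no-go") rather than fed downstream.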
Rae Tompkins (30:04) Okay, perfect. Thank you. Thanks, Yenny. And then, regarding the expired PLIs: why is Medallion sending the PLIs in the back end instead of just the one active from the packet? I think, Connie, you and I just spoke about this; they're coming over via the CAQH application. So if they're provided there, they're more than likely coming through with the completed application. So there may be more than just the active one on file; it's just pulling directly from CAQH.
Anna Jacobson (30:38) We don't want that info. Yeah, we don't want that info feeding in. I mean, that's what's feeding into our system, the expired, like, CAQH data, it seems like. The one that's verified in the packet should be the one that's feeding.
Rae Tompkins (30:53) So, it was feeding over through the Salesforce integration with the expired information, but not necessarily the one that was updated and current, correct? Okay, let me take a look into that. Did you happen to have, okay, you provided the…
Anna Jacobson (31:09) Yeah.
Rae Tompkins (31:11) Let me take a look at that and see if we can figure out what exactly happened. Obviously, the more current one should be pulling over directly to Salesforce, so we can look at that and see what exactly happened with that particular example. And then…
Susan E Hulin (31:24) …if you can just do an overview of all the data that's coming over, just to ensure that you're not feeding over bad data or expired information to us, if something changed as you guys were updating, if something…
Rae Tompkins (31:38) Yeah, absolutely. Something…
Susan E Hulin (31:40) …broke or whatever. Just to double-check to make sure that you're not feeding over, again, bad data or expired information to us.
Rae Tompkins (31:47) Yeah, absolutely. I'll take this back and see what I can figure out internally; I can definitely provide more context. Thank you for forwarding that example. And then there was one provider whose file came over clean, but there were no dates on the checklist. For FTCA, we check a box, so it doesn't provide that information. So that is expected behavior, that one particular box in the example was missing.
Anna Jacobson (32:18) Connie, was this the issue? This is the tort one.
Susan E Hulin (32:21) But what is on the checklist? The…
Rae Tompkins (32:26) …checklist just says that it's an FTCA; it doesn't show any kind…
Susan E Hulin (32:31) …of verification or anything?
Anna Jacobson (32:32) Right. So…
Susan E Hulin (32:33) …it's not going to pass an audit or anything?
Rae Tompkins (32:37) No, it…
Susan E Hulin (32:38) …won't.
Rae Tompkins (32:38) So…
Susan E Hulin (32:39) …we're going to have to revisit this element.
Rae Tompkins (32:42) Okay. Let me take this back and level-set with the operations team to confirm whether that's expected behavior, and if it is, share that with Optum, or otherwise investigate how we can make sure that's populating directly into the checklist. So I will definitely circle back on that as well.
Susan E Hulin (32:59) Okay.
Rae Tompkins (33:03) Great. I know we are a little bit over time, but are there any questions outside of the agenda items we discussed?
Anna Jacobson (33:11) I think we passed over one of the PLI items, the one right above the tort,
Rae Tompkins (33:15) oh, my apologies. The PLI information with older CAQH versus newer: the packet had the old one attached, but there was a more recent attestation; the attached CAQH was from December 2025. The PLI was verified against the new CAQH that had been pulled in and housed in the Medallion system. I think that was the one I was going to flag to the team, to investigate why the older one was pulling in versus the newer. But based on the audit I conducted prior to our call, we just wanted to clarify that the PLI was verified via the CAQH application, with the CAQH page attached showing the updated attestation at the top, where the attestation date is captured. So in this instance, it wasn't fed over to Salesforce correctly? Is that what the flag was?
Connie (34:09) The flag on our end is the fact that you have a screenshot for the PLI, the updated PLI, but the CAQH that is attached is from December.
Rae Tompkins (34:19) So, there was an updated copy in the cred file, but the CAQH application had an older copy pulling in, correct? Okay, perfect. Thank you for that confirmation. I'll work with our ops team to see what exactly happened there and what we can do to prevent that from happening in the future.
Rae Tompkins (34:40) Thanks, Connie.
Khushi Soni (34:43) I also have a couple things to mention. So, Niko, I've actually sent you an email; I need a few sample records for the clean initial cred and re-cred files, as well as the unclean examples, if possible.
Khushi Soni (34:58) If you can send them today, that would be a great help. Sure. Thank you. Yeah, thank you. And another ask is that there are some cred files we need you to re-trigger; we can send those via email, if you can do that for us? Yep.
Rae Tompkins (35:15) Sounds great.
Khushi Soni (35:16) All right. Thank you.
Rae Tompkins (35:22) Well, thank you so much, everyone, for your time. I'll be in touch regarding the items we discussed in a follow-up email, and we'll continue conversations outside of our biweekly syncs and definitely keep the Optum team updated on the measures we're putting in place to prevent some of these errors from occurring again. Thank you so much, everyone; have a good one.
Khushi Soni (35:43) You too.