VanRein Compliance Podcast

AI Boom: Navigating the Compliance Minefield

Rob & Dawn Van Buskirk


AI is already inside your business, and the uncomfortable truth is you might not even know where. Copilot in Microsoft, Gemini in Google, bots layered on top of bots, and “quick tests” in personal accounts all create real compliance risk the moment sensitive data enters the mix. At the same time, regulation is tightening fast, which means the gap between how teams use AI and what auditors expect is getting more dangerous by the week.

We walk through what’s changing globally with the EU AI Act and its risk-based tiers, then bring it home to the US reality with HIPAA compliance and the coming pressure on the HIPAA Security Rule. We talk plainly about what enforcement-ready security looks like: multi-factor authentication everywhere ePHI touches, encryption in transit and at rest you can prove, audit logging that shows who did what, and risk assessments that aren’t just checklists. We also dig into vendor accountability, why Business Associate Agreements still matter, and how to validate a partner’s security posture through trust centers, real certifications, and subprocessor transparency.

Then we get practical about AI governance. We share the guardrails we rely on: mapping data flows, keeping an AI tool inventory on your supplier register, setting an AI usage policy your team can actually follow, and using a human-in-the-middle approach to reduce hallucination and patient-safety liability in healthcare AI. If you’re trying to stay audit ready for HIPAA, SOC 2, ISO 27001, or HITRUST while still moving fast with AI, this gives you a clear path forward.

Subscribe for more compliance and security guidance, share this with your leadership team, and leave a review if it helped. What AI tool is already embedded in your workplace stack?

Thank You for Listening to the VRC Podcast!
Visit us at VanRein Compliance
You can Book a 15min Call with a Guide
Follow us on LinkedIn
Follow us on X
Follow us on Facebook


AI Meets The Compliance Crunch

Rob

Today we're talking about the collision that's already happening inside your business. Whether you see it or not, AI is disrupting how we work, right? It is moving fast, but regulation is moving faster, and you are right in the middle.

Dawn

We're in a world now where deploying AI isn't just a tech decision, it's a compliance decision. Between the EU AI Act and upcoming changes to the HIPAA Security Rule, which is going to happen pretty soon, hopefully, the margin for error is shrinking fast.

Rob

And here's the kicker. Most companies are still treating AI like a tool when regulators are treating it like a regulated system.

Announcer

Welcome to the VanRein Compliance Podcast with Rob and Dawn. We help growing teams reduce risk, build trust, and stay audit ready without the overwhelm.

Rob

I'm Rob with VanRein Compliance, one of the co-founders here.

Dawn

And I'm Dawn, co-founder as well with VanRein Compliance.

Rob

Yes. And today we are going to dive deeper into the AI boom and how navigating the compliance minefield impacts business. You like that? Yeah.

Dawn

Yeah.

Rob

Well, it is a good one. It is a minefield. Or a landmine, or a landfill. Not what people want to deal with, right? But let's give AI its credit. Here's what we're seeing: automation at scale, faster decision-making, real productivity gains, and AI moving from pilot into production.

EU AI Act Risk Tiers

HIPAA Security Rule Tightens

Dawn

Yep. But every one of those deployments comes with risk exposure, documentation requirements, and oversight expectations. This is no longer optional. And as far as the regulatory squeeze, let's talk about the EU side of it. The EU has already implemented the AI Act, and it's already leading globally. Here in the United States, we're behind. They've built a risk-based structure, and their tiers are basically banned, high risk, limited, or minimal. Leave it to the Europeans, they're very black and white. And by August 2026, high-risk systems are fully enforced. That means hiring AI, healthcare AI, and financial decisioning are all going to be under heavy scrutiny. So now, Rob, tell us about the US and the HIPAA side of things. That's the EU side of things.

Rob

Yeah. So on the US side of things, it's a little bit different. Obviously, we are governed more per state. I know California and Colorado have a couple of AI laws. There's nothing to date, as of April 27th when we're recording this, in the way of an overall AI law. However, what I'm excited about, and I think Dawn's excited as well, is that there is some legislation moving forward. And also the HIPAA Security Rule will be changing next month. So about a month from now, we will have a new HIPAA Security Rule. There's nothing in there that states explicit AI governance, but there are some AI enhancements. Let's see what the final rule looks like.

Dawn

Right.

Rob

The big things we're seeing in the HIPAA Security Rule: first of all, MFA needs to be everywhere ePHI touches. That's a key thing. Everywhere and anywhere the data is touched, you have to have multi-factor, from your computers to the platforms you use to any type of chatbot, anything. Encryption in transit and at rest: this has been a long-time standard, however it is now going to be required, and you will be audited against it. We audit our clients to this level already, which makes it a lot easier. Then annual risk assessments, real ones, not checklists, kids. We can't be doing that anymore. We've been doing this for years for our clients. But what's going to happen now is you're going to have 60 days to be audit ready, and the government will be doing sampling. You have to have an audit security report from your auditors to show what you actually have done. And a risk register for accountability. I think, Dawn, that's your favorite one. We just went through ours at VanRein. We listed all of our partners out, the services they provide, and the cost. And lo and behold, there were some cost savings in there. So that's another good thing.

Dawn

Yes.

AI And HIPAA Collision Risks

Rob

There's always money in software to get cleaned up and cut a little bit. The next two I'm excited about, actually the next three. First is vendor accountability, which is a big one. You need to make sure you're not just working with vendors; you need to be working with true strategic partners. Period. No more vendors. Everybody's tied in this together. If they're going to act like a vendor, you need to get rid of them and move on. Plain and simple. The next piece is pen testing and vulnerability scanning. We are doing this at VanRein Compliance. This is a new service for 2026, and we've got multiple companies already signed up and moving forward. We're very excited about how we're using AI to do the penetration testing with Betty and our team as the human in the middle, right? To verify everything. There's a legal piece to that, and we're going to make sure we do it right. And then the last piece is your recovery expectations. You've got to get your environment back up within 72 hours. That's going to be the law. Honestly, you'd better get it back up within ten hours, or eight, or six, whatever your business can handle. So those are the key pieces of what we've really got to look at. Now, as we layer AI on top of that, that's where things get a little funky, don't they, Dawn? They get a little funny. Yes.

Dawn

So imagine AI and HIPAA colliding. Interesting, right?

Rob

Is that another A on the back of HIPAA?

Dawn

So let's break this down. Data exposure: AI tools ingest data. If that includes ePHI and it's not controlled, that can be considered a breach. Okay, so that's a big thing here. There are AI tools that do ingest ePHI and are HIPAA compliant tools. We're not talking ChatGPT, folks, just to be clear. And then, Rob, tell us about the other one: shadow AI.

Rob

Yes. Shadow AI, I was getting to that.

Dawn

Yeah.

Rob

Your employees are already using it. It's on their phones. It's Copilot in the Microsoft environment, Gemini in the Google environment, right? GPT, Claude, Claude bots, all the bots, any bots. Everybody has plugged in an LLM. Every platform has plugged it in and they're selling it to you, right? It's already in your ecosystem. You just don't know where it is. And this is where there's a problem: when there's no audit trail, no access control, no governance, that's a HIPAA failure. We don't want a HIPAA failure. Those are the key pieces. Yep. And we need to talk about the inaccurate outputs.

Dawn

Yes. So it's not only what you're entering into it, or teaching it, I guess I should say. AI hallucinations in healthcare, no. It's not only putting healthcare data into an AI that's not HIPAA compliant; think about if it hallucinated.

SPEAKER_01

Yeah.

Dawn

That's a huge liability. I mean, that's crazy. Even if it's HIPAA compliant, say you're a provider and you're banking on the data it's going to provide from ambient listening, if you will, like in a clinic. Think about the liability if it spits out the wrong symptoms, diagnosis, all that stuff. You definitely have to have a human in the middle for that, especially. It's scary.

Rob

Definitely. The lack of logging, you already talked about that. Making sure everything is logged, all the audit controls are actually logged, everything really dialed in and tightened up, that's a big thing. Then the AI vendors, which we don't want, we want true partners. You've got to have a Business Associate Agreement. BAAs are key. They're still a thing, and they're actually going to get more enforced, meaning full audits of all BAAs, which we've been doing. The government's been okay with it, but now it's: I want to see them all. Let me see all your contracts. These are now required evidence, not just required to have. Meaning, if you've been in business with a company for five years, you can't just have a BAA for the last couple of months. No, you need it going back five years. Then security validation and ongoing oversight. Those are the key pieces you've got to make sure of. There's a fly. That's a vendor risk right there, if you've got an AI bot flying around. Anyway, oversight and expectations, those are key, key pieces.

Dawn

Yeah, yeah. You already mentioned the lack of logging. HIPAA requires audit controls: logs, access logs, all this stuff. AI systems without logging, without knowing who did what, without tracking the chats, the date and timestamps, are just not compliant. And the danger right now is that everyone and their brother and their uncle and their aunt and their grandma is creating some sort of AI bot. And it's not just Claude and ChatGPT and Gemini and Copilot. They're creating AI bots that work with those, on those, between those. I mean, it is crazy today. You could just go out there and it'll give you a whole list of: well, I plug into this and I plug into that. So now you've got whatever on the back end, but then you've got these other little bots within it, inside of it. So now you've got more liability, because you have the main LLM, and then all these other bots within that. You have to really understand where the data is flowing. Is this all compliant? This bot that's in there doing these things, where is that data going? And you can see how it happens: you get excited. Oh, this can do this, and this can do that. I'm gonna plug into this and plug into that. But be careful.

Real World AI Deployment Breakdown

Rob

Then they become agentic and they start talking. Bots talk to each other. It's an old Terminator thing. This doesn't look good. So that's why we're gonna go into the next segment, a real-world breakdown. When the bots take over. Wait, no, no, we're not doing that. Let's walk through what a real-world experience looks like, right? We work with healthcare SaaS companies, we work with providers, answering services, IT platforms, technology platforms, MSPs, MSSPs, all these types of companies, right? What if they deploy a chat assistant? What if they deploy a voice bot that takes phone calls? What if they deploy a bot for documentation, security documentation, plus internal automation and some clinical support for providers, right? They're just gonna turn it on. Because the folks at Epic, which is a big one, or Solutionreach, or any of those others, they're like, yeah, we're just gonna turn it on. Ten bucks more a month. Where's the data going? How do you know where it's going? Have you put that on your risk register? Have you asked those questions? Is your data getting ingested into their LLM and actually being used to train their environment? You've got to have that formal governance, which is key.

Dawn

Yeah. So: no formal governance (governance, we keep saying government), data flows aren't mapped, vendors aren't vetted, and there's no logging. What are you expecting? Boom: exposure, potential exposure, risk, risk, risk. And you can see how easily it can happen. Someone won't know. They think they're automating or making their job better, and they don't know that what they're doing is causing a huge exposure. That's what AI governance is all about.

Rob

Yep. And I still believe that people are good, right? They generally want to help the process, and they're going to use Claude or GPT or something to do that. Example: our team. We're using AI, we're using a lot of Claude lately, to verify and review documentation before it goes into our QA process. And we've put the guardrails around it. We've made sure the data is ours, we've set up our business accounts, we're not sharing the data, and we've trained the team how to use it and the steps to go through. And I want you listeners to do that as well, because that reduces the incident exposure and the legal exposure. We have to remember, no matter what, all these bots come out and they're all chatting, but the law hasn't changed. The accountability is still with your leadership, and the accountability is still with whoever actually enabled that bot. Nothing has changed there. We just have a new toy, right? And when OCR investigates and reviews the incident or the breach, what's going to happen is they're going to look for the human in the middle and where things were verified. That's a big piece.

Dawn

Yep.

Rob

And the worst part: they didn't even know it was happening. A lot of folks, whether from a management standpoint, a leadership standpoint, or even a technical or employee standpoint, aren't sure what just happened. Then they're like, wow, what do we do with this? How do we make this better? How do we fix this? Those are the things we're really looking at, and those are the big keys.

Map Data Flows And Control Tools

Dawn

Yep. And every AI tool, approved or not, if it exists, you track it. I tell our customers: you need to put it on your third-party supplier register. You need to put what AI tool it is, whether it's paid (hopefully it is), and what data is flowing through it. And if it's hooked to anything else, on and on. And make sure that tool is secure. What are its certifications? What is the security of it, that type of thing. And the other thing is an AI usage policy.
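The register fields Dawn lists (the tool, whether it's a paid business account, what data flows through it, what it connects to, what attestations back it up) can be sketched as a simple data structure. This is a minimal illustration; the field names, tools, and flag rules are our own assumptions, not a mandated format.

```python
from dataclasses import dataclass, field

@dataclass
class AIToolEntry:
    """One row of a third-party AI supplier register (illustrative fields)."""
    tool: str
    vendor: str
    paid_business_account: bool
    data_categories: list[str] = field(default_factory=list)  # e.g. ["ePHI"]
    connected_systems: list[str] = field(default_factory=list)
    certifications: list[str] = field(default_factory=list)   # e.g. ["SOC 2 Type II"]
    baa_signed: bool = False

    def flags(self) -> list[str]:
        """Return review flags for this entry."""
        issues = []
        if not self.paid_business_account:
            issues.append("personal/free account in use")
        if "ePHI" in self.data_categories and not self.baa_signed:
            issues.append("ePHI flows through tool with no BAA")
        return issues

# Hypothetical register: one vetted tool, one shadow-AI find.
register = [
    AIToolEntry("Copilot", "Microsoft", True, ["internal docs"], ["M365"],
                ["SOC 2 Type II"], baa_signed=True),
    AIToolEntry("Ambient scribe", "ExampleVendor", False, ["ePHI"], ["EHR"]),
]
for entry in register:
    print(entry.tool, entry.flags())
```

Even a spreadsheet with these columns works; the value is that every tool, approved or not, gets a row, and anything flagged gets reviewed.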

Rob

Oh, yes.

Dawn

And this is basically laying out to your team: these are the tools we're using, and these are the ones you need to use. What is not okay is putting it into whatever's easier for you, meaning your personal account, your phone, "I'm just gonna do this real quick." Nope. It needs to be in the paid account. So it's training your employees to use the company-approved AI tools. That's definitely what you need to do.

Rob

Because if you can't map it, you can't control it.

Dawn

Yep.

Rob

You don't know what that is. So that risk register is key. A takeaway for everybody listening: sit down and just use a spreadsheet, a Google Doc, a Word doc, whatever doc, right? Get your team together and map it out. Where does the data go? Data flow. Or go classic sticky notes. I've been doing a lot more sticky notes lately on my whiteboard over there. It's kind of fun.
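The data-flow mapping exercise Rob describes can start as nothing fancier than each system pointing at everywhere it sends data. A minimal sketch, with entirely hypothetical system names, shows why the map matters: you can mechanically answer "where can this data end up?"

```python
# Toy data-flow map: system -> systems it sends data to.
# All names here are hypothetical examples.
flows = {
    "EHR": ["ambient-scribe", "billing"],
    "ambient-scribe": ["vendor-LLM"],
    "billing": [],
    "vendor-LLM": ["subprocessor-cloud"],
}

def reachable_from(source: str) -> set[str]:
    """Every system data can reach from `source`: the exposure surface."""
    seen, stack = set(), [source]
    while stack:
        node = stack.pop()
        for nxt in flows.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return seen

print(sorted(reachable_from("EHR")))
# Every system in that set needs a BAA, logging, and a register entry.
```

The same exercise works on a whiteboard with sticky notes: if ePHI starts at one box, everything downstream of that box is in scope.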

Dawn

Yeah. Yeah.

Rob

The next piece is Business Associate Agreements. We're actually going through our audits now, reviewing client BAAs and looking at how old they are, because they're all going to need to get updated here in the next month. With the new rule, we're going to have to refresh everything, and we already have everything we need for that. Through the audits, basically starting from May 4: do we have everything ready? Because you have 60 days to be auditable, and then everything has to be set by end of the year.

Dawn

And remember, as far as validating and vetting your vendors: a good vendor, a good partner, is going to have a trust center or security center. Go check it out. You can check their certificate and search it. Because you know what happened with some of these platforms that posted false SOC 2 certs; the website says, oh yeah, I'm SOC 2 certified, and it's false. You should be able to go and validate their security, and there are ways to do that. You can find out if they are actually examined for SOC 2, or certified if it's ISO. So don't just take a HIPAA seal or something: oh, they have a HIPAA seal. That means nothing. Go ask them for their latest risk summary report, some sort of summary report that says, hey, we've done these things, these are the controls. Have them give you a policy: where's your security policy, that type of thing. Don't just go off their website and what it shows. Dig into it. This is your customers' data. This is nothing to joke around about.

Rob

And I always like to look at subprocessors. Where's the data going, right? What platforms is it going to? How is it flowing? How's it getting processed? What type of data? Is it here locally? Is it international? Stuff like that. I always like to look at the subprocessor pieces as well. And then, beyond the BAAs and locking down your partners so they have only the access they need, there's multi-factor, encryption, logging, and access controls. Those are big. These are not just nice-to-haves; they're going to be required, along with penetration testing, which is going to be required annually. And then vulnerability scans. We're going quarterly with our clients. You've just got to do this. I like doing it monthly. You can do it daily, but it gets a little heavy.

Dawn

Adding that AI governance: this is the part most companies have no idea what it means. "Well, I have an AI usage policy, I'm good." Nope. There are different levels of AI governance, and it depends on a lot of different things. So we'd basically do a scoping call: what AI are you using, what data is flowing through that AI, all these things. That helps us understand what level of AI governance you need. It's not only policies. There's also a risk register, an approved tool list (which is your supplier risk register), and most importantly, ongoing oversight. It's not one and done. We're not checking the box, folks. This is something that needs ongoing review. And I'd say AI governance is probably more of a quarterly review, if not more often, because AI is changing every second. You can do a HIPAA audit annually, yes, because HIPAA doesn't change by the minute. But AI is changing so rapidly that AI governance is going to have to be ongoing, ongoing oversight. Yeah.

Training Takeaways And Next Steps

Rob

Yeah. And this is where companies get behind, right? They need to test the environment. Those are the key pieces. This is where companies aren't really sure what to do, and this is why penetration testing and vulnerability scanning are not optional. You've got to do it, and you need to understand the rest of your environment, especially when AI is expanding your attack surface. Because now there's another layer, another attack surface, right? Another layer of the cake before you get to the gooey center. Is there a layer that can be attacked and exploited?

Dawn

And training. What do we do well here at VanRein? We train. We train. If you haven't already, take our free AI training online. We do have a more expanded version that's a paid version, but our free AI training online has some very, very good nuggets of information. It will help you and your team really understand what's allowed, what's not allowed, why, and what to do, that type of thing. So that's a key piece.

Rob

And it gives good nuggets. We like our compliance training. It's free. We like free. Go do free, it's good. But to sum this up and take some actionable items away from this podcast: focus on what the truth is. The truth is, AI is not the risk. It's not the risk to your job, it's not the risk to your business, it's not the risk to the world. What is the risk is uncontrolled AI. Not knowing what it's doing, not understanding where it's going, not understanding how it impacts the environment, how it impacts my HIPAA compliance verification, my SOC 2 examination, my ISO certification, or my HITRUST certification. Those are the key pieces.

Dawn

Yeah. And ongoing compliance, including AI governance, is what keeps you going. It doesn't stop you in your tracks where you go, oh goodness, oh gosh, we have an issue. It keeps everything running smoothly in your business. Yep.

Rob

Yep, definitely does. And if you're deploying AI and you don't have governance around it, you're exposed. That's a key piece, and that's why you're here today: so you know, from Dawn and me, that you have a helping hand to get through it. Well, we appreciate everybody joining us for this week's VanRein Compliance Podcast. If you have questions, concerns, or comments, drop them in the comments section down there; our team reads through them. Like and subscribe, which we hope you do, and pass it along to others so they can enjoy the podcast as well. We thank you for joining us this week. Bye-bye.