VanRein Compliance Podcast
Learn how you can secure the future of your business with a clear plan to reduce your risk. We discuss compliance and data security matters across SOC2, ISO27001, HIPAA, GDPR, CPRA, NY SHIELD, Texas HB300, and HITRUST, and include life stories as well. It's NOT just a boring BizCast. We also talk about our Family Business and how you can start your own Family Business that will reshape your future.
The AI Governance Playbook with Bennie Cleveland
We sit down with auditor and risk leader Bennie Cleveland to unpack how to make AI defensible in the real world. We cover governance, healthcare and privacy frameworks, modern attack patterns, and the playbooks that separate confident teams from lucky ones.
• defining AI ownership, approvals, data scope, monitoring and explainability
• building an AI inventory and supplier risk register
• mapping to NIST CSF, HIPAA, GDPR, SEC expectations
• deepfakes and social engineering expanding the attack surface
• darknet monitoring and proactive exposure checks
• running tabletops for ransomware, data loss and web compromise
• human in the loop and prompt discipline for high-impact decisions
• common audit gaps in IR, BCDR and communications
• vendor AI due diligence and data transfer controls
• buying fewer tools with clearer purpose and guardrails
Thank You for Listening to the VRC Podcast!
Visit us at VanRein Compliance
You can Book a 15min Call with a Guide
Follow us on LinkedIn
Follow us on X
Follow us on Facebook
Welcome to the VanRein Compliance Podcast with Rob and Dawn. We help growing teams reduce risk, build trust, and stay audit-ready without the overwhelm.
Dawn:Here at VanRein Compliance, our auditor is by every measure one of the best in the business. Bennie Cleveland joined our team last year. Bennie has more than two decades of experience in cybersecurity, auditing, and enterprise risk strategy.
Rob:Bennie's experience crosses a lot of industries: healthcare, financial services, life sciences, SaaS, higher education, and government. Today's podcast gets into the nitty-gritty of AI governance and compliance.
Dawn:Bennie is here with us today on the podcast. Thanks, Bennie, for joining.
Bennie:Thanks, everybody. And thank you, Rob and Dawn, for having me on.
Rob:You're welcome. We're excited to finally get you on the podcast. It's been a bit, right? Long time coming. Well, better late than never. We're excited for you to be here at VanRein Compliance and to bring the expertise that you do. You've already met with a lot of our clients and been working with them. But why don't you give a rundown of who you are and what you bring to the table, because you bring a lot. The floor is yours.
Bennie:Certainly. First and foremost, thanks, Rob and Dawn, for bringing me on to the podcast. I don't want to be redundant, but: over 20 years of experience in cybersecurity, audit, and enterprise risk, across healthcare, financial services, life sciences, and higher education. My work focuses on helping organizations build secure, compliant, and audit-ready programs, especially as technology evolves. I have a plethora of certifications; I won't go through all of them, but they span executive cyber leadership, governance and audit, enterprise risk, and AI governance at scale and delivery of operations. So that's me in a nutshell.
Rob:Now you're being modest about your certifications, Bennie. When we don't have the nice VanRein screens on, I'll have to paint a picture for everybody: he has a wall just plastered with certifications. It's quite impressive, and obviously you've earned them; you've done the hard work. So thank you. Now, when we talked with Dr. Howard a week or so ago, we really got into the pause authority and the human in the loop. She's coming at it from a consultative, HR realm, and you come at AI as an auditor. From your perspective as an auditor, what is AI governance? What does it mean to Bennie Cleveland when you go into a small, large, or enterprise organization?
Bennie:Excellent question, Rob. From an auditing perspective, AI governance is the structured way organizations ensure artificial intelligence is used securely, responsibly, and in compliance with laws and business expectations. In simple terms: what AI tools are approved? Who owns the AI risk and decisions, and what data is being used? We also want to understand how the specific AI activity is monitored and logged, what the outcomes are, and how we can explain those outcomes. That's the baseline for regulated environments. If AI touches health data, personal data, financial data, or employee information, it immediately becomes part of our compliance environment. So for healthcare and HIPAA-regulated organizations, AI governance covers PHI and anything AI touches as a whole.
Rob:And what are those triggers? Dawn, you talked about this earlier. You've done this a while; you've scanned thousands, tens of thousands of documents. What is the one nugget, from a governance standpoint, that you look for as you're reviewing evidence for clients?
Dawn:So with our customers, AI really comes across in something like a third-party supplier risk register. We want our clients to document AI, because if you ask people, they say, oh, I use this, I use that. Well, hang on a second. First of all, is it the free version or an enterprise version? What department is using it? Do you have any guardrails around it? And that's the thing: customers usually say, no, I don't have any guardrails around it. So this is where we set up the program of: let's understand what AI you're using, what data is flowing where, and where it's stored. That ties into the AI governance and AI assessment work, and it sets up the scoping, so Bennie can go in and say, okay, let's talk about this, let's work through the data flow, and that type of thing.
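The inventory Dawn describes can be sketched as a simple structured register. The field names and review checks below are illustrative assumptions, not a VanRein template:

```python
from dataclasses import dataclass, field

@dataclass
class AIToolEntry:
    """One row in an AI inventory / supplier risk register."""
    name: str
    department: str                 # who is using it
    tier: str                       # "free" or "enterprise"
    data_categories: list = field(default_factory=list)  # e.g. ["PHI", "PII"]
    data_storage: str = "unknown"   # where the data ends up
    guardrails: bool = False        # any usage policy or controls in place?
    approved: bool = False          # formally approved for the environment?

def flag_for_review(entry: AIToolEntry) -> list:
    """Return the gaps an auditor would want closed before scoping."""
    gaps = []
    if not entry.approved:
        gaps.append("not formally approved")
    if not entry.guardrails:
        gaps.append("no guardrails documented")
    if entry.tier == "free" and entry.data_categories:
        gaps.append("sensitive data on a free-tier tool")
    if entry.data_storage == "unknown":
        gaps.append("data storage location unknown")
    return gaps

tool = AIToolEntry("ChatBot X", "Marketing", "free", ["PII"])
print(flag_for_review(tool))
```

Run against each tool employees report using, the resulting gaps list is where the scoping conversation starts.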
Rob:Those are the key pieces. And then, Bennie, what do you look for as a seasoned pro?
Bennie:So obviously the headline topic is AI, but we're really looking at the capabilities of AI. How does AI process that sensitive data? How does AI influence key decisions? And how does it introduce cyber and privacy risk? Those seem to be the top priorities. And in 2026, regulators are no longer asking if you use AI; they're asking how you govern the tools you're using.
Speaker 3:Yeah.
Bennie:And so, once we understand what AI governance really is, the next question becomes: what makes it defensible? How can we defend the overall environment that we manage?
Dawn:Yeah.
Rob:Well now, wait a minute. You mean we still have legal responsibility, Bennie? But GPT did this, not me! Feels like a movie. How real is I, Robot now, right? We still need Will Smith. It's not scary, but it's eerie. We all play with different levels of bots, from Claude to GPT to Gemini to whatever bot, and when they start to have that personality, you can see where it's going to go, which is interesting, and scary too. The legal piece is important, and I think we've touched on that before: you can't just trust what AI gives you. You have to keep it inside the guardrails, because ultimately leadership is responsible, or ownership, or the board, even down to the frontline worker. He or she could make a mistake, think, oh, that's a great diagnosis of a health problem, and just put it in the chart on the healthcare side. We love all our healthcare providers, but we work with a lot of them and we know they are very fast-paced, very rushed, and they do make mistakes. Unfortunately, there are a lot of mistakes within healthcare; that's why the malpractice insurance is so high. And I know healthcare is a massive target for AI right now. Since we do so much HIPAA work: where do executives in the healthcare space want to take AI, and where do middle management and the frontline workers? What do you see in your auditing, Bennie?
Bennie:Yeah. For 2026, the HIPAA privacy laws are always top priority, along with the cybersecurity standards. And again, we're not recreating the wheel; this is reinforcing a lot of existing standards. For example, the NIST CSF seems to be top priority: how do we integrate that framework into what we do day to day? There are also a lot of privacy laws, local, national, and international, that have to be followed. One in particular is GDPR: how do we handle the data, and how do we send data across the US boundary? And last are the financial and state regulations. There are several standards, especially with the SEC, around how we handle the use of AI and the sensitivity of the data being transferred from A to B. So once we understand how the AI governance works and the expectations around it, we need to set some structure around defending against the various attacks that come into play. One of the things we talk about is human in the loop. What we're really speaking to is using a human to intervene with the actual outcomes. Instead of allowing ChatGPT to render an output unchecked, we have a human step in and say, you know what, that doesn't seem right. But there's an adverse side to that as well: the human could be wrong. They could be biased, they could have a personal vendetta against that particular outcome, and now we still have a bad outcome overall. So there are a lot of tangible components that go into understanding how we're going to use AI, and certainly the governance behind it.
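The human-in-the-loop step Bennie describes can be sketched as a simple routing gate. The impact labels and confidence threshold here are illustrative assumptions, not regulatory values:

```python
def route_ai_output(output: str, confidence: float, impact: str) -> str:
    """Route a model output: auto-accept only low-impact, high-confidence
    results; anything touching a high-impact decision goes to a human."""
    HIGH_IMPACT = {"diagnosis", "financial", "hr"}   # illustrative categories
    if impact in HIGH_IMPACT:
        return "human_review"   # always a human for high-impact decisions
    if confidence < 0.85:
        return "human_review"   # low confidence: escalate
    return "auto_accept"
```

As Bennie notes, the reviewer can be wrong or biased too, so a real gate would also log the reviewer's decision alongside the model's output.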
Dawn:Yeah. Where do you see the NIST CSF going? Do you see it expanding on the legal requirements over the year, or do you think next year we'll see more stringent requirements and restrictions with NIST? Do you see those controls changing at all?
Bennie:Excellent question. I do think NIST will render a 3.0 version of the CSF. I would imagine, if not this year then certainly next year, they may present a draft version that highlights some of the AI tangible items, or the integration of AI into the overall framework. One of the first domains in the CSF is governance, so as we're speaking on the governance of AI, I do think you'll see a very intensive component added in the near future.
Dawn:I was thinking that too. I I think we're gonna see an evolution of the the requirements, the controls. Um, and I'm hoping that spills over into HIPAA.
Bennie:Yes.
Dawn:That is a very old law.
Bennie:It's funny you bring that up, because when you think about HIPAA, it really points to the NIST framework. So there will be a lot of mapping; you're going to see a lot of cross-framework mapping, where organizations will use all of them. Even the CIS 18, which is another really stout framework, outlines the expectations for cybersecurity with the adoption of AI.
Dawn:Yeah. So here's a question. A lot of the work you do has been enterprise work. Have you seen any cyberattacks in regards to AI? Have you seen anything recently, or do you foresee it in the future with all these AI bots out there? And as a company, small or big, how do you rein that in? How do you manage it? I know governance helps, but with everything going everywhere, can you stop it? Is there a way to monitor it? That's a loaded question, I know, but what have you seen lately?
Bennie:That is an excellent question. It is the elephant in the room. I think the bigger question is: what's been approved? We see a lot of organizations utilizing AI tools that have not been approved and have not been tested in their particular environments. Part two of that: yes, we have seen several security attacks. One is what we call deepfaking, where adversaries impersonate individuals using AI. This is nothing new; social engineering has always been out there. Everyone knows about phishing, and there's also quishing, vishing, and smishing, so various attacks already exist. When we add the AI element on top of that, we're just expanding that capability even faster. We can impersonate with the voice, we can impersonate with your footprint, and we can impersonate with what you have. There are variances depending on the capability of the person using the AI tool, and on whether the enterprise is vulnerable. We do see a lot of organizations that may not have the appropriate guardrails, which makes them extremely vulnerable. And several of these attack patterns existed prior to the emergence of AI; it's just that AI has now brought them to the forefront.
Rob:Too, we've kind of forgotten about the darknet, which is very alive, even more alive and well now. If you look at the past five, maybe ten years, it was: okay, there's your information on the darknet. Thieves are taking information; we know a fullz, a full healthcare record, is worth about 800 USD on the black market. And then people said, oh, there are two internets. Well, there are probably more than two, but yes: there's the Amazons and the Microsofts and the Googles, and that only accounts for maybe 20% of the internet, I believe; the majority of it is fraudulent sites. And we have to remember that the bots everybody builds, as they become more agentic, are going to go to the darknet too. You really have to have that governance in place, because they're going to pull the good, the bad, the ugly, and the evil out of everywhere.
Speaker 3:Yep.
Rob:Yeah. Right.
Bennie:Yeah, to your point, about 95% of the internet is the bad part. We have Tor sites that are really the gateway to the darknet. And I think there's a misconception: what most people think the internet really is, is the great part, but to your point, Rob, that's only the 5%. The Googles, the Bings, Firefox, these browsers and search engines allow access, but if anything gets into the wrong person's hands, it can be used for bad intents or purposes. And that's where governance comes in: it should help organizations monitor what they're seeing within the darknet. That could be sensitive information related to the organization itself, it could be individual users, it could be a plethora of things; you name it, it's there. This is why we have organizations that do what we call deep scraping: they're scraping for sensitive details that pertain to themselves or the organization on the darknet, and they try to reclaim or alter that information, and then put up guardrails so it doesn't happen to them.
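Proactive exposure checks along these lines are often built on k-anonymity, so the secret itself never leaves the organization. This sketch shows only the hash-prefix mechanics against a mock suffix set; a real breach-monitoring service would return the suffix list for the submitted prefix:

```python
import hashlib

def split_for_k_anonymity(secret: str) -> tuple:
    """SHA-1 the secret and split the hex digest: only the 5-char prefix
    would be sent to a lookup service; the 35-char suffix stays local."""
    digest = hashlib.sha1(secret.encode()).hexdigest().upper()
    return digest[:5], digest[5:]

def is_exposed(secret: str, returned_suffixes: set) -> bool:
    """Compare the locally kept suffix against the suffixes the service
    returned for our prefix; the secret never leaves the machine."""
    _, suffix = split_for_k_anonymity(secret)
    return suffix in returned_suffixes

# mock: pretend a monitoring service returned these breached suffixes
prefix, suffix = split_for_k_anonymity("password123")
print(is_exposed("password123", {suffix}))   # matches the mock "breach"
print(is_exposed("password123", {"F" * 35})) # no match
```

The same prefix-query idea underpins services like breached-password lookups; scanning for leaked corporate documents or credentials on darknet markets is usually outsourced to a monitoring vendor.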
Rob:And you do a lot of tabletop exercises; you actually do them all here at VanRein. How does a tabletop exercise come in to identify potential exposures to cyberattacks with AI?
Bennie:Excellent question, Rob. What tabletops essentially do is explore the gaps, or any deficiency your cyber resilience team may have. This could be from a business standpoint, a technical standpoint, or a business impact perspective. The tabletop essentially puts the organization in the fire, to understand how they would respond under a live simulated attack. Some of the attacks could be ransomware, data loss, social engineering, or a website compromise, just to name a few, and the goal is to understand how the organization stands up to responding to and recovering from that particular attack. Ransomware attacks in particular raise the ante due to their sensitivity: the adversary will typically have a sample of sensitive data and will request a ransom payment to return that data to the organization. It's a very strategic process to learn where the organization stands today. To me, tabletops are quintessential in the cyberspace. Without actually understanding what we're capable of up against a ransomware or data loss attack, we're basically oblivious to cyber as a whole, and we may not have the proper understanding of what needs to be remediated.
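A tabletop's gap-finding step can be sketched as a comparison of actions taken against a playbook. The injects and expected steps below are illustrative examples, not a complete playbook:

```python
# Illustrative ransomware injects a facilitator might feed the team.
INJECTS = [
    "T+0: file shares encrypted; ransom note includes a data sample",
    "T+30m: adversary threatens to publish exfiltrated records",
    "T+2h: media inquiry about the incident",
]

# Illustrative playbook steps the team is expected to hit.
EXPECTED_STEPS = [
    "activate incident response plan",
    "isolate affected hosts",
    "notify legal and communications",
    "invoke BCDR plan",
]

def score_exercise(actions_taken: list) -> dict:
    """Compare what the team actually did against the playbook; the gaps
    list is what gets remediated after the exercise."""
    gaps = [step for step in EXPECTED_STEPS if step not in actions_taken]
    return {"covered": len(EXPECTED_STEPS) - len(gaps), "gaps": gaps}
```

A team that activates IR and isolates hosts but never loops in legal/comms or BCDR would come out of this exercise with exactly those two gaps to remediate.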
Dawn:Yeah. One question we're getting a lot is from customers that are developing their own AI. That goes into a whole other area: they're not just hooking in, though maybe they're using one of many different AIs on the back end. When they're in the development stage, I think there's a lot that can go wrong, obviously, a lot of security checks they might skip. I think it would be helpful, because we have a lot of customers doing this right now, to give a few items to look for. Obviously governance is a huge part of it, but when someone is developing an AI, are there some red flags, some things to look for?
Bennie:Absolutely. When we think about the usage of AI, we want to understand the risk management portion of it. Where is risk management at? Where are cybersecurity, compliance, and internal audit when we talk about AI? Some of the questions on top of that would be: where is AI being used within your organization? What data is it touching? What decisions will it influence? Then we need to understand the how: how do we integrate this into our risk assessments? When we talk about our vendor and third-party reviews, how is AI being leveraged? Are we allowing a tool to generate a response, or are we going back to a human-in-the-loop type of process? The same goes for our policy and control testing and our compliance monitoring. And again, going back to what we alluded to with the tabletops, we would explore a lot of these gaps going through a tabletop as well. Those are the key areas I would start with as we adopt AI into our ecosystem.
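The questions Bennie lists can be turned into a pre-adoption checklist. The wording below paraphrases his list, and the readiness logic is an illustrative sketch:

```python
# Paraphrase of the governance questions from the discussion above.
CHECKLIST = [
    "Where is AI being used in the organization?",
    "What data is it touching?",
    "What decisions will it influence?",
    "Is it integrated into risk assessments?",
    "Is it covered in vendor/third-party reviews?",
    "Is a human in the loop for generated responses?",
    "Is it in scope for policy and control testing?",
    "Is it covered by compliance monitoring?",
]

def readiness(answers: dict) -> dict:
    """answers maps each checklist question to True (addressed) or False.
    Anything unanswered counts as an open item."""
    open_items = [q for q in CHECKLIST if not answers.get(q, False)]
    return {"answered": len(CHECKLIST) - len(open_items),
            "open_items": open_items,
            "ready": not open_items}
```

A team building its own AI would walk this list before go-live; any open item is a red flag of the kind Dawn asked about.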
Dawn:That's great. That's going to be very helpful for our listeners, because like I said, it's: hey, I'm going to quickly create my own bot. Okay, well, hang on. Let's get some guardrails up.
Rob:Right. But they see Amazon do it, and the Washington Post did it, right? Everyone's doing it. And it's Q1, so there's a lot of change in the business landscape. I think every leader, executive, board member, and small business owner is going: how can I save 10% this quarter?
Dawn:Yeah.
Rob:And how's the bot going to do it? But I think the problem with the execs and the owners is they're just going to throw it in and think it fixes everything. And then, wow, you have to train it. It's a child, it's a dog; you have to teach it what to do and how to do things.
Dawn:And I'm glad Bennie brought that up, because when we talked to Dr. Howard, it was the human element: you've got to double-check AI. She even recommended checking AI against AI, taking Claude against GPT, asking the same question; you will get different answers. They'll probably be similar, but it's interesting; everyone should try it. But ultimately, as a human, you have to look at the output and say, okay, does this make sense? You have to do your research; you can't just rely on it. Obviously, on an enterprise level, if you've trained AI with your knowledge base, that's great; it knows about your company. But asking it a legal question, or something about a law, is definitely something you need to research separately. That human element is still really needed. So I'm glad you brought that up earlier.
unknown:Yep.
Bennie:And just to piggyback one more item: prompt engineering, or how you frame what you're trying to get out of these AI platforms. I think where a lot of organizations may be a little lean in their understanding is that it's all about the prompt. What are you asking this AI tool to do?
Speaker 3:Yep.
Bennie:And how precisely you ask will determine the output. So I think that's another tangible element that has to be explored.
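The prompt discipline Bennie describes can be made concrete with a structured template. The section headers here are one common convention, not a requirement of any particular model:

```python
def build_prompt(role: str, task: str, constraints: list, output_format: str) -> str:
    """Assemble a structured prompt so the ask is explicit rather than
    conversational; precision in each section shapes the output."""
    lines = [f"Role: {role}", f"Task: {task}", "Constraints:"]
    lines += [f"- {c}" for c in constraints]
    lines.append(f"Output format: {output_format}")
    return "\n".join(lines)

print(build_prompt(
    "compliance analyst",
    "summarize the data-retention policy",
    ["cite the relevant sections", "flag anything ambiguous", "no legal advice"],
    "bullet list",
))
```

The same question asked as a casual chat message and as a structured prompt like this will, as Dawn notes next, often produce very different outcomes.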
Dawn:Oh yeah, how you ask it. That's the thing: how do you prompt it? Are you having a conversation with it? How are you asking it? That is a very good point, because you will get different outcomes. That's really important. And be nice, is what you're saying.
Bennie:Believe it or not, these models, and Rob, you talked about this before, the agentic models, they are learning, at a rapid rate, to where their responses are almost: well, we already know this about you, so we're going to fill in the blanks. We know you like Kool-Aid, or we know you like to eat this on this particular day. It will assume that and move on to the next question. It's able to create a hypothesis about your next outcome, and that's really the learning aspect of these models.
Dawn:Yep. You can make it your friend; you can make it agree with you. It's kind of scary out there. You could totally make it your friend. I know Grok is one where you can change who your persona is and how it talks with you, and it can be your friend, your imaginary friend. So there's that, but then there's also: I need you to tell me something. So it's kind of like, you're my friend, but now I need you to give me an honest, true, factual answer. So I think this is really important, just to piggyback: it's the way you ask the AI, whichever AI you're using. Is it conversational, be-my-friend, or is it, I need you to tell me about this law? There's a difference. You're right.
Rob:Right. So now, in the audits you've performed on AI, what are the top items you always see clients make a mistake on, items that risk not only their company but also, if they're in the healthcare space, their federal requirements for HIPAA compliance, or their SOC, ISO, HITRUST, or GDPR? What's Bennie's top-ten or top-five list of: you'd better fix it, this is what I see?
Bennie:Yeah. From what I see, the incident response plan is probably the common denominator. If we don't have a thorough incident response plan, how do we understand our cyber-related incidents? How do we recover, how do we respond, are we resilient? Another component is the disaster recovery plan, your business continuity and disaster recovery plans, which work in conjunction with the incident response plan. The third item would probably be the communications plan: how do we communicate through a live cyber-related incident, or when we've lost data? And then four through ten could be AI related. There's an infusion of social engineering; we talked about deepfakes; there are also quishing attacks. The risk is really understanding how our organization can stand up to these cyber-related attacks. It's one thing to speak about it, but if it's not documented, it doesn't count. A lot of the organizations I've seen may not have a granular approach, meaning they don't have cyber-related playbooks that could assist them in responding to an attack. When we think about cyber-related attacks, we're also thinking about reputational impact, business impact, and technical impact. All of these impacts could put us in a bad place: we're getting sued, or we have reputational loss on both sides. So to answer the question, those would probably be the top three, and then four through ten would be AI-related attacks, or what I call combination attacks, where you may have ransom and data loss, or social engineering and theft, all happening at the same time.
Dawn:That's great to know. I think people are going, "oh," at this point when they listen to this.
Rob:And everybody's selling everything about AI. So take the third and fourth parties, the vendors you work with: do you want them to be a vendor or a strategic partner? Because everything is plugged in. Any clients I speak with, and the same for you and Dawn, they're like: oh, my HR platform is going to roll out AI; my marketing platform is going to roll out AI; my EMR, my accounting software, I've got Copilot. And you ask: okay, where is that AI taking the accounting data or the health data? Oh, I don't know.
Dawn:Oh, okay, well, you need to ask about their AI policies and governance. So that's one thing for listeners: you've got to ask where the data is going, not just go, ooh, it makes a pretty spreadsheet and my taxes look better. And our tax professional let us know they're using AI a little bit too. When you think about AI in financials, taxes, and accounting, it can be great for forecasting, definitely, but you've got to be careful with what that plugin and that platform are. We press a lot on the third-party supplier risk register: you have to know where your data is going. So when Bennie comes in to do the scope for an audit, he's got an idea: okay, wow, you're using ten different AI bots. It's good to have that understanding for scoping when you're doing an audit.
Rob:Yeah. And what's your best advice, Bennie, to the listeners for compliance audits and AI as we move into '26 and start to focus forward this year? What nuggets of knowledge can you share?
Bennie:I would just say: be more aware. AI has obviously been the talk of the town, but I want individuals to be more invested in the AI platforms and tool sets. Learn how the tool can actually improve your workload. Understand why we are using the AI tool, and whether the tool is approved in our environment, so there's going to be a lot of vetting. Pricing and budgeting will also be a top priority; these tools come with a nice cost, and we have to make sure they make sense in our environment. We do see a lot of organizations that are trigger happy: they're buying tools without really understanding whether they'll be beneficial in the long run. And then there are several risks: data leakage when we use public tools, third-party AI exposures, prompt and model manipulation, just to name a few. Again, if we don't have a streamlined understanding of the tool, why we're using it, and what issue it will resolve for us, we get trigger happy and buy a plethora of tools that may not have the value we're hoping to get out of them.
Dawn:Yeah, that's definitely true. Start with why: why do we need this? Why do I need AI in my HR, in my accounting, in my databases? Microsoft is pushing it, same as Google with Gemini; it is everywhere. They want Copilot in everything, from your pocket to your watch. Is it going to help me write this, or help me with a spreadsheet, or, as the commercial says, help me pick the best linebacker? Stuff like that.
Rob:Yeah. And wall it off: tell it what to do. We also have to remember that when you enable it, it's always going to share your data with its LLM; it's going to suck as much as it can out of the environment. So make sure to turn that off first and create your own. Well, good stuff, Bennie, as always. We always appreciate your hard work here at VanRein; I know our clients do, along with your wealth of knowledge, your wall of certifications, and your professionalism. We really appreciate that. What's your last little bit of advice as we wrap up here on the VanRein podcast?
Bennie:I would just say: AI is quickly becoming part of everyday operations, but governance determines whether it becomes a competitive advantage or a compliance and security risk. Organizations that embed AI governance into their audit and risk programs will be the ones operating with confidence tomorrow. That's the tidbit I would leave everybody with. This is nothing new, but certainly dive in head first; be a hands-on leader to understand how AI can benefit you, and also understand the risks that come with it.
Rob:That's great, a perfect way to end it. Well, thank you, Bennie, for joining us on the podcast this week. Always a pleasure. Thank you for all you do for our clients, and until next time. Thanks, Bennie.
Bennie:Yep, thank you.