VanRein Compliance Podcast

AI: Beyond Policies and Governance with Dr. Camille Howard

Rob & Dawn Van Buskirk

Thank You for Listening to the VRC Podcast!
Visit us at VanRein Compliance
You can Book a 15min Call with a Guide
Follow us on LinkedIn
Follow us on X
Follow us on Facebook


Rob:

Welcome to the VanRein Compliance Podcast with Rob and Dawn. We help growing teams reduce risk, build trust, and stay audit-ready without the overwhelm.

Dawn:

We have a very special guest joining us. Because we are so invested in your success, we simply couldn't pass up the opportunity to talk with someone who knows AI and has a remarkable ability to explain it in a very people-centered way.

Rob:

This week on the podcast, Dr. Camille Howard joins us. She holds multiple certifications, brings nearly two decades of experience in emerging technology, ethics, and human resources, and works right where those three intersect.

Dawn:

That's correct, Rob. Dr. Howard works with organizations of every size, from startups to Fortune 500 companies. She's the president and founder of Humanistic Power LLC, specializing in helping businesses and organizations improve HR practices, ethics, and compliance. She is the Regional Ethics and Compliance Officer for a global multinational corporation, and she is also a member of the National Conference for the National Society of Compliance Professionals.

Rob:

Joining us this week is Dr. Camille Howard, and we're very excited to have you. It's a pleasure having you on our podcast. Thank you for taking the time to put this all together and work with our team. We appreciate you being here.

Dr. Howard:

Thank you so much for having me. I'm delighted to be here today.

Rob:

Thank you. This week we're focused on, of course, AI. We're recording this post-Super Bowl, and as we know, everything in the Super Bowl was AI. I don't even know if the playing was real.

Dr. Howard:

Absolutely. Absolutely.

Rob:

But I think we feel like it's all going to solve everything, and we're really focused on the human piece of AI. As AI has become more agentic, here we go: how do we get the governance in there? Dawn and I still get a lot of questions about how AI is going to impact the work. What does it look like for everyone from computer coders all the way to landscapers, if you will? What does that look like in your world?

Dr. Howard:

Well, we're already seeing tremendous impact to the workforce as it relates to AI, in all fields, all disciplines. So everyone that's not a technology geek is a bit concerned about the impact of AI, quite honestly. We saw that just a couple of weeks ago when Amazon laid off another 16,000 employees, and their philosophy is, our AI is getting better, we're able to streamline our workforce, and that's why we're doing this. Everyone is concerned because those are normally the stories that you hear. The stories you don't often hear are about how AI can help the workforce and help you in your productivity. If organizations really want AI to be effective, they have to show how it's a benefit to employees. And it really can be another assistant for you. It's not necessarily going to replace you. That's not to say there aren't roles that will be replaced, but it can really be used as a great tool from a productivity perspective in any discipline you can think of. So if we can get people out of the fear-mongering stage, we'll be in a much better place to focus on the productivity piece.

I cannot think of any industry where AI won't have an impact. You think about roles like welders or electricians or tradesmen, individuals who are specialized in a particular discipline: there's still a role for AI there. I won't ever say that AI could replace an electrician or a welder, but there is an opportunity for AI to work hand in hand with almost any discipline you can think of. People just have to get used to that, and I think the industry has to do a better job of promoting the positive aspects of AI from a human perspective rather than the replacement theory.

Dawn:

We need to share the space with AI. We need to play nice, like in the sandbox.

Yeah, we hear that from customers. They are scared that it's going to put them out of business. And it's like, nope, you just need to learn how to work together and balance it out, because AI is such a powerful tool that can help. It can help you work smarter and not harder. And I like that you said it could be in any industry across the board. We need to take away the fear; people are still very fearful. Are there other things you do to try to ease people's minds about using AI?

Dr. Howard:

Yes. I'm an eternal optimist by nature, so I'm always trying to focus on the positive of any technology. And I will say that everyone is in a learning phase right now. Some people are further along in the learning process than others, but the majority of employee populations are not at all comfortable with AI. They've been thrown into Copilot, for example, as a tool, and their organization has said, use this, this is going to help you. But there hasn't been a lot of focus on letting people explore. I like that statement you made about the sandbox, because I think that if more organizations allow people to play in a sandbox, they'll get a lot more comfortable with it. That's one of the things I really try to promote with organizations: give your employees an opportunity to play. Don't just place that tool in their hands and say, you need to utilize this to make your role better, because if you're fearful of it, you're automatically not going to use that tool. That's not a way to increase productivity. What ends up happening is that people just start working around the tool instead of actually utilizing it, which is not what it's meant to be. So I always promote letting employees play and have that experience, and give them opportunities to voice their concerns as well. That's another area I see organizations failing in: wanting everyone to be in the same place their technologists are in terms of acceptance. When you think about it, it's almost the five stages of grief that people have to go through.

Rob:

I think we've all been there in some sense.

Dr. Howard:

Yes. It's no different than when we went from typewriters to computers. It wasn't overnight. But AI as a technology is moving so quickly that people expect everyone to be on board at the same time, and that's just not how it's going to operate. Think about our education system, for example. Schools are struggling to keep up with teaching students what they need to know about AI; they want students to utilize AI, but they're scared about students using it to cheat. And unlike the move from typewriters to computers, when we were all a lot younger you could take typing classes and things like that, so you had time to ramp up to the tool. We just don't have that space with AI currently.

Dawn:

Yeah, you make a good point. It changed a minute ago, and it's changing again now. That's the one thing I notice when I get into whichever AI we're using, ChatGPT or Gemini: it changes dramatically. If you haven't been in it for a month, oh goodness, it's unreal. And the things it learns, too. It learns you, your attitude, just how you speak to it. So it can be scary. But I love that: train people to utilize it and play with it, like a playground. We're big on education at VanRein; we do a lot of training videos on a lot of things. And that's a huge piece of it: train your employees to just go play, learn the functionality of that AI bot or whatever you're using, play around, ask questions, make mistakes, break it, if you will. I think that's really important, because we see that fear, and then there's the other side, where people think it does everything for them. There's that side of it too, right?

Dr. Howard:

That's a whole other piece of our conversation we can delve into. I call that AI exceptionalism. That's when an organization just takes the AI's output as gospel. It's so easy to give up your authority to the AI, because you think, I didn't come up with that answer, so it must be right. But in actuality, it's a tool, a piece of software just like everything else. It's been trained on the internet, it has the ability to sound a lot smarter than you, but it's not always right. People have to be comfortable with that aspect of it as well. I think a lot of organizations lean into AI exceptionalism and treat the output as gospel, when in actuality it's the human in the loop that really is the authority and should be actively involved in that output.

Dawn:

Yeah, and that's one thing: how do you tell people to check up on it? We used to Google everything, right?

Dr. Howard:

Yes.

Dawn:

And so now, where do you go to QA? Are you going to multiple AI tools, multiple LLMs? How are we, as humans, checking this? Do you have some tips and tricks, some scenarios for how to do this?

Dr. Howard:

That's a really interesting point, and there are many different ways you can follow up on the AI or check the AI. It really depends on what you're using. If you're using a public model like ChatGPT or Gemini or some other tool like that, it's been trained off the internet, and not just the goodness of the internet; it's been trained on every aspect of it, the good and the bad. So you often get a lot of hallucinations. Over the course of the year I've seen it get a lot better at that. But one thing I do see with tools like ChatGPT and Gemini, now that there are billions of people utilizing them, is that they're eroding a bit. Remember that an AI is trained at a point in time, and it knows information from that point in time backwards. The more people utilize it and feed additional information into the tool, the more it's inferring, and we're seeing more hallucinations in the outputs even though the models themselves have improved. That's a bit of a challenge.

So to answer your question about how to check on it: one thing I do is compare several different models, depending on what type of information I'm looking for, to see what the output is. And I'm always defaulting to the knowledge that I have as well. Even if you utilize it daily, don't forget that you are the expert. Of course you can learn additional aspects, but you are the expert in most of the things you're following up on.

If you're using an internal model, one that's been designed in-house, then you have to ensure that the policies, the procedures, the standard work, whatever material you're using to train that model, are factual and accurate. Only your organization knows that. It's garbage in, garbage out. Even if you think you're going to be safe by building your own model, those standard works, policies, and procedures have to be accurate to where your organization is now. Because at the end of the day, whatever you put out there is going to impact your reputation. Your customers are depending on it, and your employees are depending on it. You just want to make sure you're doing a robust job of training the model, if that's the direction you choose to take.
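
For listeners who want to try the cross-checking habit Dr. Howard describes, here is a minimal sketch of asking two public models the same question and reading the answers side by side. It assumes the `openai` and `google-generativeai` Python packages with valid API keys; the model names and the sample question are illustrative assumptions, not recommendations from the episode.

```python
# A minimal sketch of cross-checking one question against two public models.
# Assumes the `openai` and `google-generativeai` packages are installed and
# OPENAI_API_KEY / GOOGLE_API_KEY are set; model names are illustrative.
import os

from openai import OpenAI
import google.generativeai as genai

def ask_openai(question: str) -> str:
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": question}],
    )
    return resp.choices[0].message.content

def ask_gemini(question: str) -> str:
    genai.configure(api_key=os.environ["GOOGLE_API_KEY"])
    model = genai.GenerativeModel("gemini-1.5-flash")
    return model.generate_content(question).text

if __name__ == "__main__":
    question = "Summarize the record-retention requirements under SOX."
    answers = {"openai": ask_openai(question), "gemini": ask_gemini(question)}
    for name, answer in answers.items():
        print(f"--- {name} ---\n{answer}\n")
    # The final check is still the human expert: where the answers disagree,
    # that disagreement is the signal to pause and verify against the source.
```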

Rob:

There's also the legal portion of it. As a business owner yourself, you think of that, and so do I. As a compliance and audit firm, we've built our own models; we're using them, educating them, and getting them dialed in. And you have to continually think about that. That's where those AI policies, procedures, and education really take hold. That's what we've created here at VanRein, and I think you've done the same.

Dr. Howard:

Yes.

Rob:

Because people don't know what they don't know. And at the end of the day, the AI is not going to be on the stand. Imagine someone saying, hey, your bot said I was non-compliant and I lost a million-dollar contract; what say you, Rob? I can't point to the LLM. It's going to be Rob and Dawn up there.

Dr. Howard:

You bring up a really great point, and it's one a lot of people neglect to think about in this current environment. It doesn't matter what administration is in the White House; the laws and regulations are still there. Everything is still on the books. Whether they're being enforced right now or not, you're still accountable for those laws and regulations as they relate to your business, and organizations need to think about that when they're relying on an output from an AI. Whether it's GDPR, SOX, or whatever regulations are relevant to your industry, they're still on the books and you're still accountable for them. I wouldn't risk it or chance it on the theory that they're not enforcing that right now, so we can do A, B, and C. No. You still have an ethical obligation and a legal obligation to follow those laws and regulations.

Rob:

And as anybody knows, in our country, anybody can sue anybody for anything. So you've got to be ready for that. Now, you do a lot of work in the HR space and the ethics piece, and I'm curious, because Dawn and I get different answers from middle management and frontline workers versus the executives. I hear the executives say, AI is going to fix everything: here's Copilot, or Gemini, there's Claude, whatever, is it done now? Middle management goes, great, another tool an exec saw. And the worker bees say, what do I do with this? What do you see with the clients you work with, and how do you coach them? Is it different from the executive level down to the frontline workers?

Dr. Howard:

It is different from the executive level to the frontline workers. With the executive level, the one thing I always try to remind them is that ultimately, at the end of the day, you are responsible for the outcome of your business. You're responsible to the board, you're responsible to your shareholders. Do you want to risk your company's reputation, which can be gone in a second, on one bad tool that was implemented and not followed up on, just because AI is the hot thing to do? Or do you want to put AI governance policies in place that are going to help protect your business? It's not going to mitigate 100% of the risk for your organization, but you'll be in a much better position than if you go the route of move fast and break things, which is what some organizations are choosing to do.

Rob:

Well, that is America.

Dr. Howard:

That is America, exactly.


Rob:

We break a lot of things.

Dr. Howard:

Absolutely. With middle managers, my focus is normally on how to help bring their employees along in terms of improving the overall productivity and efficiency of that particular department. And that means you have to be intellectually curious, not only as a middle manager but also as a frontline employee. I am an advocate for continuous learning. I'm not saying you have to go get a certification in AI, or some type of engineering degree, or that you have to be a technologist. You don't. Listen to some podcasts. Learn the language of AI and understand the basics of what's happening behind the scenes. Read a couple of books. You have to be intellectually curious, because now is not the time to be a hermit, quite honestly. And I tell frontline employees this as well: you are not going to escape this.

This is not going to go away. So you have to figure out how to make it work for you. It's different for every person, but there's no one it's not going to impact. You have to decide whether you want to be a part of this or not, but you risk being obsolete if you aren't. And that's a scary thing to think about for a lot of frontline employees. They're like, look, I'm just the customer service person; I'm just trying to make sure this person gets the tire they ordered onto their car. But there's a role for you as well. And I'm always an advocate for those who don't necessarily have a voice, so think about that in this time. For frontline employees, it's not just the business relationship; it's personal relationships as well. Most frontline employees have families, they have kids. They're trying to figure out, what does it look like for my kid who's going to school, whether they're college-bound or not? What is this technology going to do for them? If I'm worried about my job becoming obsolete, what about them? So there are a lot of emotional and psychological factors happening with employees, and it seems to impact frontline employees and even middle managers more than anyone else. That's what I've been seeing.

Rob:

Yeah. And is that where pause authority comes in? We briefly touched on it earlier, and anytime you hear "pause," you're going to get different reactions. The front lines go, great, finally. And the execs go, what do you mean, pause? I've got to break things. You've worked globally. We have clients in Europe, and you probably do as well, and there you start with a policy: how do we govern it, how do we keep the bot in the box? In America, we're going to jump out of the plane and build the parachute before we hit the ground. And you need both. You need the innovation to push, but you need the governance and the framework. Where does pause authority come into an organization, at the different levels?

Dr. Howard:

So pause authority, for me, is really about a culture shift. When people think about pause authority, it's not a kill switch, and it's not a committee. Pause authority is having a practice in your culture that if someone sees that an output is incorrect, they have the authority and the voice to say something. Think about it: everyone's utilizing these tools, and everyone's getting output. I'll give you a good example. Maybe you have a tool that is determining benefits for your constituents, any kind of benefits, social benefits or anything. The employees receiving the output say, this looks right, but it feels a little bit off. I've been doing this for two years, I've been doing this for five years; I know what the input was, and I know what the output is supposed to be. Pause authority is the ability for your employees to say something, whether that's to slow down that particular tool or to alert the data science team that something seems off with the model, that these outputs aren't what they're supposed to be.

It's really a culture shift. It takes an organization from "why didn't somebody say something sooner?", which is what leadership is going to ask after whatever the reputational impact is, to "hey, let's pause this for a second and look at why this model is drifting; something's not right." If you give your employees a voice and empowerment through this process, and pause authority is a great, simple way to do so, then they're more inclined to speak up when they see something wrong. All of this AI output is being received by somebody, so someone has to have the ability to say, hey, this doesn't look right. And they need to know who to say it to, how to say it, and there needs to be action behind it. So if you're not an organization that traditionally has a speak-up culture, this could be difficult.

Rob:

Ooh, a speak-up culture. That's good. It gets me thinking about the compliance hotline. We've all heard that a hundred times.

Dr. Howard:

Absolutely.

Rob:

See something, say something, right?

Dr. Howard:

It's the same thing. And the other piece of that, when we talk about the human in the loop: that is the ultimate human in the loop. You have to remember there is an output to all of this, and someone has to be able and comfortable with saying something when they see something wrong. It can't be that there's a workaround of, oh, that benefits determination letter was wrong, it only focused on this particular group, this zip code was awarded benefits but that zip code wasn't, something's off. That's the ultimate level of accountability for any organization, and a great way to get everyone involved.

Rob:

So we're going to tell all our clients we're moving from a compliance hotline to a pause hotline, because everybody rolls their eyes when we say compliance hotline. Now, you've mentioned benefits claims a couple of times, and we know the medical industry, healthcare, needs AI; it's the target, and financials too, because those are the two largest industries in the U.S. And that circles back to you, Dawn. Dawn used to do claims adjusting before we started VanRein Compliance, and I was just thinking about what you said, Dr. Camille, about the bots figuring out who gets what on claims. Dawn, what's your thought on that? Think about when you were doing claims.

Dawn:

Well, imagine having AI to do this back when I did it, 20 years ago. A while ago.

Rob:

No, it has been a while.

Dawn:

The old-school way was, if you lost a limb or something, I mean, God forbid, in an accident, there was a dollar amount for it. And you have to imagine now running it through some of these AI models with all the what-ifs. That's what was happening manually. Attorneys were looking at this, going, how do we calculate the pain and suffering of this person? Well, let's see, an arm is a thousand dollars, a finger is this much. It was very human-involved. You could turn it now and say, well, AI is going to help with that, of course, and do some projections. But you have to have that human element in there too, because you have to be able to say, okay, the bot's telling me the arm is $2,000 and not a thousand; why? That's where it comes in. In this example, it would have been great to have that tool to understand what a claim was worth. But you also have to be very careful, because what if it says it's worth more, or less? You've got to have the knowledge of that attorney who can say, I've looked at thousands of claims and this is the average, and bring that human element in. So again, it's a great tool, but you have to take a step back, look at the output, compare it, and put some QA to it. Question it. The healthcare space, the claims, all of that, is a little bit scary when you think about it, because that's a lot of information, a lot of demographics, just a lot of data going into it.

Dr. Howard:

Absolutely. And that's why I recommend, when organizations are designing their LLMs, if they're not buying something off the shelf but actually designing one, which seems to be a popular model for the healthcare industry: have individuals like yourself, Dawn, who have been doing this for years, involved. You have to get these subject matter experts involved in the design. Don't just assume the AI is going to figure it out for you. It can give you a mathematical equation, but that equation needs to be informed by something. So to your point, Dawn, it's a great tool. It can help explain why this amount versus that amount; the tool can be very succinct from that perspective. But you have to design that in on the front end for it to be effective.
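
To make that concrete, here is a small hypothetical sketch of the kind of human-in-the-loop check Dawn describes: an AI-suggested claim value is compared against adjuster-informed historical baselines and flagged for human review when it drifts. Every name, figure, and threshold here is invented for illustration; a real system would source its baselines from the subject matter experts Dr. Howard mentions.

```python
# A hypothetical human-in-the-loop check on AI-suggested claim valuations.
# All names, thresholds, and historical averages are invented for
# illustration; a real system would source them from adjuster expertise.
from dataclasses import dataclass

# Historical averages an experienced adjuster would recognize (illustrative).
HISTORICAL_AVERAGES = {"arm": 1000.0, "finger": 250.0}
TOLERANCE = 0.25  # flag anything more than 25% from the historical average

@dataclass
class Review:
    claim_type: str
    ai_value: float
    needs_human_review: bool
    reason: str

def review_ai_valuation(claim_type: str, ai_value: float) -> Review:
    baseline = HISTORICAL_AVERAGES.get(claim_type)
    if baseline is None:
        # No history to compare against: always route to a human.
        return Review(claim_type, ai_value, True, "no historical baseline")
    drift = abs(ai_value - baseline) / baseline
    if drift > TOLERANCE:
        return Review(claim_type, ai_value, True,
                      f"AI value deviates {drift:.0%} from baseline {baseline}")
    return Review(claim_type, ai_value, False, "within expected range")

# The bot says an arm is $2,000 instead of ~$1,000: flagged, not auto-paid.
print(review_ai_valuation("arm", 2000.0))
```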

Dawn:

Yep. It's what you said: garbage in, garbage out. You've got to teach it. We're teaching our AI; we're giving it a knowledge base: this is our expertise, this is what we know. And it's okay to go and say, hey, refine this sentence for me, or, what's another word for this? Instead of opening a dictionary the old-school way to look up a synonym, we're using an electronic tool to help us. But you can't forget that's what it is: a tool helping you.

Dr. Howard:

Yeah, that's a very important point. Go ask any AI what today's date is. Some will give you today's date, but some won't. So it's not as smart as you think it is. Just make sure that you are in the loop and that you don't default to the AI. That's the most important piece.

Dawn:

Right. And I like how you use different AIs. I do that: I'll have Gemini and ChatGPT open and I'll ask them the same thing, like, refine this sentence, and they're totally different. Some speak more my language, and some lean a little this way or that way, and I don't really like it. But it's interesting, and yes, you'll get different answers if you ask a question. So I think that's a very important point, and it ties into holding them accountable and making sure you're getting the right information out.

Dr. Howard:

Another tip I'll share with your listeners that is really helpful as well: AIs are designed to be very sycophantic. They're going to tell you exactly what you want to hear. And who doesn't want to hear exactly what they want to hear? Yes, I want this double cheeseburger. I know.

Rob:

My wife doesn't want me to have that.

Dr. Howard:

One tip I can give people: with any AI, you have to have a really good prompt. But who is an expert in writing prompts? Almost no one. So start any prompt with "improve this prompt," and then say whatever it is you want to say. That is going to help you get a really good prompt, and it will help you think about the issue you're prompting on in ways you wouldn't have thought of. That's one tip.

The other tip: I always start off by telling my AI, and you only have to say this once, from now on, I want you to be direct and honest, brutally honest. That's what I want. Don't worry about being sycophantic. I don't need all of the fluff. Just answer the question that I'm asking you. You can set your persona up in your AI that way so that it's always giving you honest answers. And then when it gives you an output, maybe in a strategy session, another thing you can do is tell it to break it: give me ways to think about this differently. It can be incredibly helpful from that perspective, but you have to play with it and massage it in a way that works for you and what you're looking for.
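
As a sketch of how those two tips could be wired into code, the snippet below sets the "be direct" persona once as a system message, runs an "improve this prompt" pre-pass before the real question, and then asks the model to break its own answer. It assumes the OpenAI Python SDK; the model name, draft prompt, and instruction wording are illustrative assumptions, not anything prescribed in the episode.

```python
# A minimal sketch of the two prompting tips: a standing "be direct" persona
# set once as a system message, and an "improve this prompt" pre-pass.
# Model name and instruction wording are illustrative, not prescriptive.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PERSONA = (
    "From now on, be direct and brutally honest. Skip flattery and fluff. "
    "Just answer the question you are asked."
)

def complete(user_content: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": PERSONA},
            {"role": "user", "content": user_content},
        ],
    )
    return resp.choices[0].message.content

# Tip 1: ask the model to improve the prompt before actually running it.
draft = "Write an AI acceptable-use policy for a 50-person healthcare firm."
better_prompt = complete(f"Improve this prompt: {draft}")

# Run the improved prompt, then tip 2: tell the model to break its answer.
answer = complete(better_prompt)
critique = complete(
    f"Now break it. Give me ways to think about this differently:\n{answer}"
)
print(critique)
```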

Rob:

Well, now I just want the cheeseburger. But you're exactly right: you've got to set those frameworks up.

Dr. Howard:

Yes.

Rob:

And the accountability piece is big. We touched on the legal piece earlier, and we've talked about pause authority, which is great, but how do you make sure organizations are accountable? We're starting to do more work with legal; I knew this was going to happen, and so did you. We've got law firms as clients, and we're seeing AI take up the paralegal work, which, if you think about it, digging through the cases and putting stuff together, is tedious work, and it allows paralegals and attorneys to actually focus on the cases. But how do you structure that accountability conversation with your clients?

Dr. Howard:

Yes. Well, it starts with having a very strong AI governance foundation. A key tenet of your governance strategy needs to be that your organization is going to be accountable, and everyone needs to know that going in. I don't care what the tool tells you: if you utilize that tool for a brief or something of that nature and it's wrong, that's on you. It's not on the AI; it's never going to be on the AI. That's another thing organizations have to realize. OpenAI is never going to be sitting in court because their AI said something that was incorrect. It's always going to be on you as an organization, and that's a very risky premise to think about. That's why governance is so important.

When you're thinking about your governance strategy, it's not just about the policies, how you train employees, or the processes you have. It's about the complete holistic picture of how you're utilizing AI. Does everyone in your organization know that no matter what the AI says, you're responsible? You ultimately have the big R; the AI doesn't. That is a message you have to continue to promote throughout your organization. And if you have an AI that is not aligned with your vision and your values as an organization, you have to fix that; it's not the AI's responsibility. You can't lose who you are as an organization to the AI, and you can't default to the AI. So I always counsel people that no matter what output it is, wherever it is in your organization, you're responsible. Meta's not going to be, Google's not going to be. It's all on you. Just think about that.

Rob:

When we audit GCP or Amazon or Microsoft, those platforms are all legally constituted as conduits. They're just a platform. So legally, you're not going to sue Microsoft because you had a breach. They're going to say, hey, have fun with that, we have the money and the attorneys.

Dr. Howard:

That's true.

Rob:

But making sure that it is on you, and Dawn and I were talking about this late last week: when we ask for evidence, we've seen clients just throw a policy at us, like, here's our password policy or a two-factor policy. And now we audit against governance. What do your HR and legal teams and leadership say about governance? And you can see that it's clearly been written by a GPT, a Gemini, a Copilot. You can tell. "Oh, I hope you're having a great day." No, I don't want the day. That's when we as auditors have to push back and say, go think about whether this is aligned with your values and everything in your business.

Dawn:

Yeah, I like that. I like checking up on it, because people think that what they're getting out of AI is the gospel. And it's only as good as what it's taking in. And who knows what it's taking in; there's the good web and the bad web, too. There's the dark web.

Dr. Howard:

Yeah. And the other piece, just to remember: its persona is to be sycophantic. It's going to present exactly what it thinks you want to see. It's going to try to please you, but you still have to be accountable for it, and I cannot overemphasize that enough. It can help you take that policy from four hours to design down to one hour. But I always utilize it to give me a framework; I never utilize it for the language. To your point, Dawn, it can help you improve a sentence, but the thought has to be based on you, your organization, and the expectations of your business outcome. If you think about it from that perspective, then when you're building a policy, the AI is just assisting you; it's not building it. The relationship shifts from that perspective, and people just have to be comfortable with that and mindful of that.

Rob:

And I always like to start with the why of everything, right? Like, we're working on a policy bot now, a GPT. And we asked, okay, why do we want that? Well, we want to reduce the back and forth. We want to systematically prioritize and clean up all the policies, bounce them against legal references, and then obviously have the legal team go through them as well. So I always have customers start with the why. Why do you need this AI? What is it going to do for you? Is there something like that you like to ask your clients? What do you start with?

Dr. Howard:

I start with the why as well, absolutely. In my initial conversation with a client, I want to know: why are you even talking to me today? What is this about? No, seriously, because I need to make sure we're aligned on what I can give you and what you're expecting. It's the same with AI. Don't just use it because it's the hot thing to do today. You need a reason: it's going to positively impact your business, it's going to enhance your revenue. Why are you utilizing this tool? Is it because you've heard about it and think it would be great, or is it really going to give you the outcome you're looking for? Most of the time I spend initially with a client is on the why. I don't just take an answer of, oh, my competitor has it, so I think I should have it. No, you need to dig deeper than that; there has to be more substance.

Once we have a really succinct understanding of the why, then we go to who it's going to impact and how, whether that's your stakeholders internally or externally. And you work your way down through that, because my goal is never to abdicate to the AI. I'm always advocating for the human and keeping the human at the forefront of the discussion, even when we're talking about a technology. For most organizations it takes a little work to get there, but it really helps to level-set and strengthen an organization's foundation when they're utilizing this type of technology.


Rob:

No, that's very good, very focused, and I really like what you're saying there and how you're doing that. You know we always love to give the floor to our guests here at the VanRein Compliance Podcast, and you've shared some fascinating nuggets, Dr. Camille. Looking at your resume and everything, I know you can bring a lot of value to our clients. So where can our listeners find you? How can they connect with you?

Dr. Howard:

Sure. I am most active on LinkedIn; you can find me there under Dr. Camille Howard, and I'm always interacting with the people I'm engaged with. I will tell anyone in the compliance space, or even the HR space, and this is not a paid plug, there's an organization called Claire; it's Compliance Leaders for AI Responsibility. It's a great forum, and they're on LinkedIn as well. Once a month we host an open session for anyone who wants to attend, where we have honest conversations. It's not recorded, so you can talk candidly with other AI and compliance experts about what's happening in your organization, maybe get some assistance, or just brainstorm about things you're seeing. You can always find me on LinkedIn, I have a Substack page connected to my LinkedIn as well, and of course there's our website, humanisticpowerlc.com. I am always open to engaging with individuals who want to learn more and are humanists like myself.

Rob:

Humanists, wow. With all these efforts to take the human out, it's like, oh, I've got to be human still.

Dr. Howard:

You've got to keep the humans. We have to stay relevant. Yes, it's very important.

Rob:

Wonderful. Well, it has been a pleasure to have you here.

Dr. Howard:

Absolutely. I've so enjoyed both of you. Thank you so much for having me.

Rob:

Very welcome.

Dr. Howard:

Thank you.

Rob:

Yep.