AI Demystified for Executives
Ever wish you had a coach to help you decipher the AI buzz and hype so you could make better-informed business decisions about what, when, and why to use AI in your business? You'll get that when you tune into the AI Demystified for Executives podcast.
Andrew bridges the gap between complex AI concepts and practical business applications. With experience in both large corporations and high-growth startups, he excels at communicating with business and technical teams alike. As an author, industry thought leader and international speaker, Andrew serves as your trusted advisor and coach on this AI journey.
AI doesn't have to be complicated!
Here is what you can expect from this podcast:
We'll explore a monthly theme with specific topics each week. You'll also receive a free cheat sheet or guide for reference.
In each podcast, our goal is to ensure that you walk away understanding:
- One key AI concept
- Its business applicability
- An actionable takeaway
Our Monday episodes are 7-10 minutes long, perfect for getting up to speed on the week's most important "aha" moments during your commute or while sipping your morning coffee or tea.
On Wednesdays, our episodes run 30-40 minutes, providing a deeper dive into the week's topic. (Maybe a bit much if you're driving!)
Once a month, during the Wednesday podcast, we'll host an interview with either a business or technical professional related to the monthly theme.
#10 - Deep Dive - The AI Usage Gap: From Shadow Innovation to Strategic Advantage
This episode covers three key themes: the gap between individual AI use and organizational strategy, the hidden potential of 'shadow AI,' and transitioning from ad hoc to strategic AI usage. It highlights communication, trust, and implementation as critical gaps hindering AI adoption and provides actionable steps for building an AI-ready culture. The episode stresses the importance of leadership alignment, up-skilling, and continuous learning, and concludes with an invitation to an upcoming workshop on AI strategy and implementation.
See a list of our upcoming workshops here: https://www.dragonflyrising.io/events
Dragonfly Rising's School of Data proudly presents the AI Demystified for Executives podcast. This is the weekly podcast for executives who want to learn how to apply AI to their business. I'm your host, Andrew Psaltis. Today we'll tackle one of the most pressing challenges in modern organizations: creating a culture that fully embraces and optimizes AI. We're going to cover three key themes. First, the gap between individual AI use and organizational strategy. Then we're going to talk about the hidden potential of shadow AI, where employees experiment with AI tools informally, often without leadership's knowledge or guidance. And last, we're going to cover how to transition from this fragmented, ad hoc use to a unified, strategic approach.
The potential of AI is undeniable, yet the reality in many organizations paints a stark contrast. Recent studies show that 70% of employees never use AI at work, despite its potential to enhance productivity by up to 25%. Only 33% of companies have taken significant steps to integrate AI into their operations.
This gap is not just a missed opportunity; it is a risk. Employees experimenting with AI without formal support often use unvetted tools, which can introduce security risks and inefficiencies.
And so while 65% of marketers and 64% of journalists report using AI tools like ChatGPT to improve their productivity, nearly 70% of employees across industries do not use AI at all.
And why does this matter?
There are several reasons why this really matters, and we need to work through it and put a plan in place for your business. First are the risks of unmanaged AI use: security breaches from unapproved tools, data privacy concerns from tools handling sensitive information, and inconsistent outcomes due to a lack of standardization. Think about these tools.
Oftentimes we're talking about them hallucinating. How do you train your employees and make sure they're aware of this? Then there are the missed opportunities:
the potential to enhance productivity and decision-making
across your organization, improve customer experience, and streamline operations. These go hand in hand with productivity.
So let's dig into each of these a little bit more.
Let's look at these critical gaps: communication, trust, implementation. We'll discuss some practical next steps to build this culture, and then give you some actionable strategies to unlock your organization's AI potential.
The three gaps are communication, trust, and implementation.
The communication gap.
Here's the core challenge you'll see, and what you're going to be facing:
misaligned metrics.
Your data scientists often focus on technical metrics,
for instance, model accuracy,
while executives look for ROI, efficiency gains, or revenue growth.
There's a misalignment here that we need to address: how do we get everyone to a shared language? These different terminologies really create barriers to understanding and to collaboration.
A data scientist may celebrate 90% model accuracy. But what does that mean to you as a business executive, when you're looking at questions about the impact on quarterly revenue? It can be really hard to go from model accuracy to impact on quarterly revenue.
So there's a real miss here, a misalignment in our terms.
The solution, and where you really need to think about this across your organization, is to develop shared metrics dashboards that bridge that divide. For instance, instead of focusing on technical performance,
flip it: highlight tangible outcomes,
like increased lead conversion.
That's a business metric, so it's going to take work across your teams in the enterprise.
You need to have the conversation to start to bridge this gap
and get everyone talking the same language.
Think about it again with another example: fraud. Say a fraud detection model achieves 99% precision. What does that mean? As a business leader, how do you connect model precision with reducing fraud-related losses? They can seem like two totally separate things.
So we need to have that conversation.
I'd urge you to think about creating a shared metrics framework and mapping these technical metrics to business KPIs.
Maybe there's a way you could translate precision and recall into, say, the percentage reduction in false positives, if it was fraud.
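As a rough illustration of that translation (a minimal sketch with made-up numbers, not a production metrics pipeline), precision can be converted into a business-facing figure like "fewer false-positive fraud alerts":

```python
# Hypothetical numbers for illustration only: translate model precision
# into a business metric, the reduction in false-positive fraud alerts
# compared with a baseline rules-based system.

def false_positives(alerts_flagged: int, precision: float) -> float:
    """False positives = flagged alerts that were not actually fraud."""
    return alerts_flagged * (1 - precision)

baseline_fp = false_positives(alerts_flagged=10_000, precision=0.80)  # old rules
model_fp = false_positives(alerts_flagged=10_000, precision=0.99)     # new model

reduction_pct = (baseline_fp - model_fp) / baseline_fp * 100
print(f"False-positive alerts cut by {reduction_pct:.0f}%")  # cut by 95%
```

The point is not the exact arithmetic; it's that "99% precision" becomes "95% fewer wasted fraud investigations," a sentence an executive can act on.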
And use data storytelling as a way to bridge this gap.
Replace, perhaps, static dashboards with narratives linking AI results to business priorities. Think about going all the way down to the use cases being implemented and mapping them back to business priorities.
As an example, say your recommendation engine increased product views by 30%, leading to a 12% sales uplift in Q3.
A couple of action items for you. One: I'd urge you to host cross-functional workshops to align on metrics and goals.
The first step in that workshop, before you get aligned, may really just be understanding where everyone's coming from.
The second action item for handling this communication gap that exists:
regularly review dashboards to ensure they highlight business-relevant outcomes.
On to the second gap. The first one's communication; the second gap is trust.
Many employees have fears about job security and displacement from AI.
There are concerns about bias, about lack of transparency,
and about misuse of AI systems.
And some of this comes from vague or absent policies around AI.
So you can understand: when there is ambiguity in the policies, and when your AI strategy is not communicated across the organization, employees will start to fear that this could displace their jobs. Think back to the metrics I provided at the beginning: about 70% of employees never use AI at work.
But if they're not using it, and they're hearing about it and seeing things in the news about how AI is going to replace jobs, you can start to understand how a lot of people may fear that it could displace theirs.
So it's important to build this trust across the organization. Create clear AI usage guidelines.
This also comes down to fostering psychological safety by encouraging open dialogue about AI adoption, and recognizing employees who experiment responsibly with AI through structured programs. Really bring them on board and bring them along; bring them into the conversation. Everyone is in this together: how do you use these tools to improve productivity, to improve your business, to bring everyone along? Think about some of these concerns. A customer support representative may fear being replaced by a chatbot, without understanding how AI could augment their role, perhaps, not replace it.
There's also a lot of concern about ethical issues and bias in hiring.
So part of the way I urge you to build this trust is through transparency. Really develop and share clear AI usage policies: define acceptable tools, data privacy protocols, and accountability structures. As an example, you could have a statement like "Our policy ensures all AI decisions in hiring are reviewed by humans to mitigate bias."
That puts people at ease and helps them understand.
And have formal recognition programs to encourage usage. If you have employees who are innovating with AI, maybe think about having awards or spotlights.
Maybe you introduce a data-driven innovation award for employees who are leveraging AI effectively. Make it part of the culture.
The third gap is implementation.
The challenge here is that many times there are centralized AI teams, sometimes a lab, and they may develop solutions without sufficient input from end users. This is more of that black box: there's some AI team building this, but not getting input from end users. Remember, this is an inflection point across all industries, across the workforce, across society.
Business users are directly interacting with these tools.
From the standpoint of building AI products and solutions, you really need to think about getting input from end users: they're using these products, and they're using the tools that you may be using to build them.
Then you have the opposite challenge of decentralized innovation, the crowd. In this case, think about all the different groups that may be experimenting and innovating, but often lack the structure needed for scalability.
How do you balance this? That's the core problem: organizations struggling to balance top-down initiatives, the centralized IT and AI teams, with bottom-up innovation. Think about these two sides. On one side, your AI team builds this, because that's how we've traditionally delivered solutions across the enterprise, and that's worked for a very long time. Part of why it worked: think about some of the technologies that were being used across IT. There was no one in marketing
who was going to go use some of those sophisticated data tools, whether they were databases or big data platforms. So it was all centralized delivery. Then came decentralization, and now, with AI, marketing can go use this. Think about it as the same process we went through when cloud first arrived on the scene.
For a very long time across organizations, we were used to IT controlling all the infrastructure and all the computing resources. When marketing, sales, finance, pick your department, needed computing resources, they worked with IT to deliver those, whether that was solutions or just access to resources. Then cloud came around, and a marketer could go use a credit card and start building a solution. Maybe they still couldn't use some of the tools being used in other parts of the organization. But now, with AI, they can.
So we have that same thing.
You have to manage both. How do you leverage
the innovation that's happening from the people doing the job?
So think about having a two-pronged approach. There's the quote-unquote lab, if you will, the centralized team, and maybe they're the ones that work on high-risk, high-complexity projects. Leverage them there. And then there's the crowd, if you will: people out doing the day-to-day jobs who are not part of the typical IT infrastructure and teams. Allow them to experiment with low-risk applications, like drafting content and things of that nature. So think about the two sides and how you really leverage both.
It's balancing control and flexibility, it really is. There need to be
guardrails through governance frameworks. So make sure you have a governance framework in place while still allowing experimentation.
As an example, maybe employees may use generative AI for internal memos, but customer-facing materials require review. That gives them the opportunity to experiment and to innovate, but in a controlled, safer manner, where it's just for internal communication.
Anything that's going to a customer, anything customer-facing, will require review.
So you have these guardrails in place; that's maybe one way to think about doing it.
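One lightweight way to make a guardrail like this concrete (a hypothetical sketch, not a real governance product; the content types and rules here are made up for illustration) is a simple policy table that anyone can consult before using an AI tool:

```python
# Hypothetical guardrail policy: generative AI is fine for internal drafts,
# but anything customer-facing or sensitive must pass human review first.

POLICY = {
    "internal_memo":   {"ai_allowed": True, "human_review": False},
    "marketing_copy":  {"ai_allowed": True, "human_review": True},
    "customer_email":  {"ai_allowed": True, "human_review": True},
    "hiring_decision": {"ai_allowed": True, "human_review": True},
}

def check_use(content_type: str) -> str:
    """Return the guardrail ruling for a given kind of content."""
    rule = POLICY.get(content_type)
    if rule is None:
        return "Not covered by policy; ask the AI governance team."
    if not rule["ai_allowed"]:
        return "AI use not permitted for this content type."
    if rule["human_review"]:
        return "AI use allowed, but a human must review before it ships."
    return "AI use allowed for internal experimentation."

print(check_use("internal_memo"))   # allowed for internal experimentation
print(check_use("customer_email"))  # allowed, but requires human review
```

In practice the "table" might live in a one-page policy document rather than code; the value is that the rules are explicit, shared, and easy to check.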
Okay, so those are the three gaps, if you will: communication, trust, and implementation.
Let's move on now to building an AI culture.
This will help tackle
some of those gaps as well.
The first is leadership alignment.
There really needs to be a clear,
unified vision for how AI aligns with your business objectives.
You really need to articulate how AI fits into the overall mission and the long-term strategy.
An example of this: you could have a statement
as simple as "We aim to use AI to streamline supply chain operations, enabling faster delivery times and reduced costs."
It really articulates how AI aligns.
It's important to demonstrate a commitment to ethical AI.
And what does that mean in your organization?
An action would be to establish an AI ethics board, or a task force, to oversee sensitive applications.
This is an emerging area in the industry. The models are still new, and we're trying to understand some things about them and make sure we're using them ethically. For example, review hiring algorithms for bias
and adjust processes to ensure fairness. And remember: use these tools as an assistant, not as a replacement.
And then build agility
into your leadership practices. Let leadership set the tone for how AI is perceived and implemented.
Empower managers
to adapt quickly to AI advancements.
One possibility: introduce quarterly AI readiness assessments for leadership teams to identify areas for improvement.
As an action step,
appoint an AI champion at the executive level to lead strategy and adoption. Maybe you're at the point where you've hired a Chief AI Officer; maybe you have not, or you're not going to. That's really a business decision, and you need to figure out what makes sense in your organization. At a bare minimum, though, someone on the executive team should be that AI champion
who leads strategy and adoption.
And regularly communicate progress and updates on AI initiatives, to make sure you're building transparency across the organization.
And, if possible, allocate budget specifically for experimentation with AI tools.
This is where it helps with the dual strategy of, quote unquote, the lab and the crowd innovating. Having a budget for AI tools, when things are moving so fast, allows that experimentation, really helping people drive forward and really helping them understand.
So the first part of building an AI culture is leadership alignment. The second aspect of this culture is upskilling and training. You really need to close the skills gap with tailored training programs.
Customize them for various roles. For example, customer service may want to teach teams how to work with AI chatbots to improve response times.
Remember, 70% of employees are not using AI in their jobs. That doesn't mean they're not using AI personally: they may be using chatbots, or they may not.
If you're listening to this and you're involved in the technology and business space, you may have the false assumption that everyone is using this. No, everyone isn't. So how do you teach people, and how do you help them, when it comes to using chatbots? This comes down to prompting, which you may have heard of: how you interact with these chatbots to get what you need. So teach them that.
Provide leadership training on evaluating AI-related risks and benefits. This needs to start at the top: there needs to be education at the executive level on evaluating the risks and benefits of the tools. With marketing, maybe it's training them on prompt engineering for generative tools.
Same with customer service.
Then there's ethical AI and risk awareness.
Across the board, all employees should be educated on identifying and mitigating biases in AI systems.
Maybe as part of the overall training you do, you include case studies of failed AI implementations,
so you can learn from past mistakes. They don't need to be your failed implementations; there are plenty of examples out there that could be used.
And really important, and I'm a very big fan of this: match theory with hands-on practice. Have workshops, have learning pathways for your teams to go through, but make sure they have access to the tools to experiment. They really need to roll up their sleeves and get their hands dirty, so to speak.
Perhaps, as an example, you host internal hackathons to encourage creative problem solving with AI.
So, three action items for you here. One: invest in a training platform accessible to all employees. There may need to be assessments of AI literacy to establish where everyone is.
Two: possibly partner with external providers for specialized workshops on cutting-edge AI tools. You may have people on staff with the expertise, really steeped in some of these areas. If not, I'd encourage you to look at bringing someone in to help bring the enterprise up to speed. Three: you could create mentorship opportunities.
Coaching, if you will, so that everyone is in this together, building trust and communication across the organization.
And then how do you measure success?
Across this, again, I'd look at adoption metrics: track the percentage of employees actively using AI tools, so we can make sure we're closing the three gaps as we start to build a culture.
There may be a way of having a dashboard, for example: "In the last quarter, 6% of our marketing team used generative AI for content creation."
Or operational efficiency: measuring time saved on routine tasks.
There may be a way for you to calculate the reduction in hours spent manually processing invoices after automating with AI.
You could think about revenue impact: attribute growth to AI-driven strategies. Maybe you're tracking sales increases from personalized, AI-driven recommendations in an e-commerce product that you have.
And keep on top of employee sentiment: survey employees on how they perceive AI's impact on their work.
Really understand them. So maybe, as an example: "80% of employees feel AI tools have made their roles more efficient and less repetitive."
We're only going to get there if we're closing the gaps in communication and trust, if we're leading, and if we're providing training.
Here are your action items for this part: develop a scorecard incorporating adoption, efficiency, and satisfaction. Use a mix of qualitative feedback, maybe surveys, and quantitative data, like ROI analysis. And share the results organization-wide. Celebrate wins and refine strategies.
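As a rough sketch of what such a scorecard could look like (the weights and sample numbers here are hypothetical, not a standard framework), you could roll adoption, efficiency, and satisfaction into one number:

```python
# Hypothetical AI-readiness scorecard: combine adoption, efficiency,
# and satisfaction into a single weighted score. Weights are illustrative
# and should be tuned to your organization's priorities.

WEIGHTS = {"adoption": 0.4, "efficiency": 0.3, "satisfaction": 0.3}

def ai_scorecard(adoption: float, efficiency: float, satisfaction: float) -> float:
    """Each input is a 0-100 score; returns the weighted overall score."""
    metrics = {"adoption": adoption, "efficiency": efficiency,
               "satisfaction": satisfaction}
    return sum(WEIGHTS[name] * value for name, value in metrics.items())

# Example quarter: 45% of staff actively use AI tools, time-saved metrics
# score 60, and employee-sentiment surveys score 80.
score = ai_scorecard(adoption=45, efficiency=60, satisfaction=80)
print(f"Overall AI readiness: {score:.0f}/100")  # 45*0.4 + 60*0.3 + 80*0.3 = 60
```

The single number is less important than the trend: reviewing it quarterly, alongside the qualitative survey feedback, shows whether the culture work is moving.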
Communicate this across
everyone. Bring everyone along.
Now, the practical steps to go from shadow to strategic.
Look at identifying any shadow AI activities that may be happening now. This is the crowd innovating; bring them into the fold. Identify who's doing it and bring them in. If you understand that it's happening in marketing, for example, maybe a marketing manager is using tools for copywriting.
Maybe you ask them to come and share their insights, so you can formalize best practices.
Maybe establish AI centers of excellence: create teams responsible for guiding AI adoption, training, and governance.
As an example, a team that evaluates vendor AI solutions to ensure alignment with the organizational goals.
Again, this could be traditional IT teams; it could be the marketing team. Or, maybe an even better option: a blended team across different roles, working together. They're each going to see things slightly differently.
So make sure to conduct an organization-wide survey to understand informal AI usage.
Maybe you prioritize one or two AI use cases with visible benefits to serve as success stories, and share these across the organization. And regularly review progress and adjust the scope.
In order to do this, and as you continue, really think about defining clear KPIs that reflect both direct outcomes and secondary effects, like employee satisfaction.
When we look forward to the future vision and where things are going: the future of AI is collaborative. If you've picked up on the theme across this episode, those three gaps, communication, trust, and implementation, are about bringing everyone together. Then there's the leadership aspect and the training aspect of building the culture.
This is collaborative. We are entering, and have entered, a time when how we build solutions becomes more and more collaborative. I like to think of it as augmenting, rather than replacing, human roles.
As a leader, I think you have to focus on long-term cultural transformation.
Think about embedding continuous learning and adaptation into your organization. That collaborative evolution, AI and humans working together, is how we'll define the future of productivity. Tasks like data analysis can be automated, while strategic decision-making really remains a human-led activity. So building that culture, that long-term transformation into an AI-ready culture, really does require ongoing learning and a commitment to ethical innovation. I'd really urge you to allocate resources for continuing education on AI trends and regulations, develop a roadmap for scaling AI use cases organization-wide, and foster a culture that celebrates curiosity and adaptability.
In closing, as we've discussed across this episode, fostering an AI-ready culture is not about adopting the latest tools. It's about creating an environment where innovation thrives, responsibly. Start by assessing your organization's readiness, defining clear strategies, and empowering your teams. Today we highlighted the three gaps, communication, trust, and implementation, that really do hinder adoption. We discussed actionable steps to create a culture that supports AI, focusing on leadership alignment, upskilling, and measurement. So do subscribe to this podcast for more insights into scaling AI effectively. And if you're ready to take the next step in your AI journey, join our upcoming workshop on building your AI organization. Some of the things you'll get from that workshop: a custom AI strategy framework, an implementation roadmap, a risk mitigation plan, and a culture transformation blueprint. And this isn't just another workshop.
This is really your organization's AI transformation accelerator. Find the link to register for the workshop in the show notes below. Remember: the future of AI belongs not to the first adopters, but to those who adopt wisely and responsibly. Thank you for tuning in to AI Demystified for Executives. Until next time, this is Andrew Psaltis, helping you navigate the future of AI in business.