Facing AI Pushback From The Top? How Educating Your Executive Team Can Ease Buy-In

One of the problems I commonly hear from leaders (typically at the Manager and Director levels) looking at AI integration is the lack of executive buy-in.

The question is, why are executives so hesitant?

There are several reasons for this. A common factor is fear.

Having spoken to multiple executive teams, I can attest to the fear factor. I even had the CEO of a mid-sized tech company question the growing number of parameters in LLMs. “What’s next, with GPT-5 and 6? Will humans become extinct?” Having worked in this area for close to two decades, I often find myself staring blankly at these questions. As a practitioner, to me, AI is nothing but a tool.

Not surprisingly, 42% of the 119 CEOs surveyed at the 2023 Yale CEO Summit said that AI could destroy humanity within five to ten years [1].

There is also the fear of breaching data privacy and security, biased and unfair models, and all those ethical risks that come with AI implementation. The fear factor is made worse when industry pioneers echo similar negative sentiments.

But we know that AI is not all bad, especially when you are deliberate about how you’ll use it to benefit the organization, its customers, and society at large, and how you’ll handle the associated risks.

Fear aside, AI as a technology, especially with Generative AI in the mix, is not well understood. Some dismiss it as the “new cryptocurrency,” destined to lose steam over time, while others tout it as the panacea to all software automation problems.

Before ChatGPT and its cousins rose to popularity, much of this information and misinformation was put out by well-intentioned marketers, sales teams, and the media. That has changed. IT consultants, software developers, cybersecurity personnel, and others who seem “technical” but have no background in AI, and have never implemented AI for production use cases, have also started adding to the noise.

It’s sad to say, but much of the AI information released by these new experts is not based on first-hand experience or directly implementable advice.

As an example, I recently saw a social media post (shown below) stating the accuracy numbers of different LLMs and their corresponding hallucination rates. What this post doesn’t state is:

  1. Accuracy on what tasks? (There are so many tasks you could use an LLM for, so how were these numbers computed?)

  2. How was the hallucination rate computed? (Is this a new metric?)

  3. More importantly, what’s the source of this data? (There are no links and no citations.)

A misleading social media post. There are no citations and no explanation of the metrics and what they measure.

This could very well be data taken out of context from Meta’s or Microsoft’s research papers or recycled news from another influencer.

Whatever it is, this seemingly authoritative content is misleading. A leader new to AI may think that GPT-4 is the gold-standard solution for all AI problems when, in fact, there is a range of cheaper but equally capable LLM and non-LLM solutions for many AI problems.

All this is to say that it’s hard for leaders to determine which information is accurate and what they should trust.

While educating your executive team may seem straightforward, it’s rarely done. Yet it’s a critical step in getting past AI hesitance and reaching full AI buy-in. From my experience teaching and working with executives, I’ve repeatedly been told how much more targeted content improves their understanding compared to reading scattered content from the Web.

Getting Started with Executive Education

To get started with executive education as a strategy to earn AI buy-in and reduce resistance, here are a few steps you can take.

Step 1: Assemble a team vested in the use of AI within the company. This can include people from your analytics team, product management team, and so on.

Step 2: Try to understand your executive team’s reasons for AI hesitance and assess their knowledge gaps. Do they understand AI at a high level? Are they fearful because of the talk of an AI takeover? Dig deep into their fears and misconceptions about AI.

Step 3: Put together a topic list that would address your executive team’s biggest concerns, fears, and knowledge gaps. Suppose ethical issues and data privacy are a big concern. You can have a topic that covers the true risks of AI and practical mitigation strategies. You can also show examples of how other companies are handling those risks.

In the healthcare domain, for example, one big fear is the use of AI in clinical diagnosis. Using AI as a sole decision-maker presents an elevated level of risk that not many hospitals want to take on. However, if you demonstrate through valid use cases how AI systems are used mostly as assistants and as sole decision-makers only in low-risk situations, it starts painting a more plausible future with AI.

Step 4: Curate content and invite speakers. Some topics will resonate deeply with executives when presented by their internal teams. However, having internal content further reinforced by external speakers can drive home crucial points, such as the fact that GenAI is not the only way to implement AI, or that jumping headfirst into AI without proper use cases can result in failed initiatives or initiatives with little to no impact.

The more you can help your executive team understand AI, how it reasons and how it’s built, where it truly helps, and how certain risks can be mitigated, the more approachable the technology will become. Of course, there will always be bad actors using the technology in harmful ways. But this is a problem with any technology, not just AI, and that is another important point to make when communicating with them.

References

KEEP LEARNING & SUCCEED WITH AI

  • READ: Read The Business Case for AI to learn practical AI applications, immediately usable strategies, and best practices to be successful with AI. Available as audiobook, print, and eBook.

  • JUMPSTART AI WORKSHOPS: These hands-on workshops help your team discover lucrative AI opportunities, create actionable AI strategies, and learn the AI landscape to accelerate adoption.

Clients We Work With…

We work with established businesses globally that are looking to integrate AI or maximize their chances of succeeding with AI projects. Select organizations we’ve served or are serving through our work include:

  • McKesson

  • 3M Healthcare

  • McMaster-Carr

  • The Odevo Group

  • IBL Mauritius

  • The University of Sydney

  • Nuclear Regulatory Commission

  • And more…
