GenAI Myths You Should Know

Change your thinking around GenAI and make positive progress in your AI journey

Debunking Common GenAI Myths in Business AI Implementation

It is a common belief that GenAI resolves many issues associated with business AI implementation.

It demands less development time, it's far more accessible to developers and business leaders alike, and its models think like humans.

Easy, isn’t it?

Except that companies that began experimenting with GenAI a year ago are still grappling with it today. Companies I know that had been on the right AI implementation path are suddenly calling for help with their GenAI implementations.

And the amusing thing is that many of their issues are not technical, nor are their “AI problems” unsolvable.

Much of the problem comes down to how these companies treat and evaluate GenAI applications, made worse by distracting information on social platforms that steers teams in the wrong direction.

In this article, we’ll focus on the three common myths surrounding GenAI that we’ve seen preventing companies from making positive progress in their AI journey.

3 GenAI Myths in Business AI Implementation

Myth #1: Software engineers are all we need

Ideate. Build. Deploy. That's currently the standard thinking when it comes to building applications on top of Generative AI models such as LLMs. I can't fully blame these teams, as vendors are selling their foundation models as such, insisting that building applications on top of LLMs is extremely straightforward. No AI expertise needed.

Because of this, many companies have resorted to leveraging their software engineers to build complex AI applications. This is fine if you're building a prototype or tools for internal use cases, or if the risks when the AI system fails are not that significant.

In reality, expertise is crucial when building enterprise-scale applications: both AI expertise and subject matter expertise. These experts can help truly evaluate the performance of AI solutions, develop scalable solutions that work on more than 2-3 test cases, and continually monitor and enhance the quality of your AI-powered applications.
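To make the "more than 2-3 test cases" point concrete, here's a minimal sketch of what systematic evaluation can look like. Everything in it is a hypothetical placeholder: `ask_llm` stands in for your model call, and the two sample cases stand in for a test set that should hold dozens to hundreds of labeled examples.

```python
# A minimal sketch of evaluating an LLM-powered feature against a labeled
# test set instead of a handful of ad-hoc prompts. `ask_llm` and the test
# cases below are hypothetical placeholders.

from typing import Callable

def evaluate(ask_llm: Callable[[str], str], test_set: list[dict]) -> float:
    """Return the model's accuracy over a labeled test set."""
    correct = 0
    for case in test_set:
        prediction = ask_llm(case["input"]).strip().lower()
        correct += prediction == case["expected"].strip().lower()
    return correct / len(test_set)

if __name__ == "__main__":
    test_set = [
        {"input": "Classify sentiment: 'Great product!'", "expected": "positive"},
        {"input": "Classify sentiment: 'Broke after a day.'", "expected": "negative"},
        # ...dozens to hundreds more cases, not 2-3
    ]
    def stub(prompt: str) -> str:
        return "positive"  # replace with a real model call
    print(f"Accuracy: {evaluate(stub, test_set):.0%}")
```

Simple accuracy won't fit every application, but the discipline is what matters: a fixed, labeled test set that every model change is measured against.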

I’ve worked with many software engineers who have tried to build full-blown AI solutions but often struggle to scale these applications because of the lack of foundational data science thinking and skills.

Bottom line: If you're building public-facing AI applications or applications meant for paying customers, it's critical to have your AI solution, if not built by an AI expert, at least evaluated by one and later assessed by relevant subject matter experts before you go to market.

Myth #2: “Data Strategy” is a hoax

“Now that GenAI is here, it’s time to ditch our data strategy.”

This is common thinking amongst executives, some of whom I’ve had to correct politely.

To be fair, training data is optional for specific applications where GenAI works out of the box. However, for most large-scale enterprise applications, especially in niche domains such as healthcare and the government space, LLMs need fine-tuning to accomplish specific tasks with reasonable accuracy.

So, to fine-tune, you’d need data. To generate data, you’d need to know what data to generate or collect in the first place and in what format. Above all, you need a strategy to support long-term data collection for various projects within the organization. Which data collection activity do you prioritize?
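As a concrete illustration of "what data and in what format," here's a minimal sketch, assuming a chat-style fine-tuning format (one JSON object per line) of the kind several fine-tuning APIs accept. The task and field contents are hypothetical:

```python
import json

# Hypothetical fine-tuning examples pairing an input with the desired output
# for a niche task (e.g., expanding clinical shorthand into plain language).
examples = [
    {
        "messages": [
            {"role": "system", "content": "Expand clinical shorthand into plain language."},
            {"role": "user", "content": "Pt c/o SOB x3d, no CP."},
            {"role": "assistant", "content": "Patient complains of shortness of breath "
                                             "for three days; no chest pain."},
        ]
    },
    # ...hundreds to thousands more examples, which is why collection needs a plan
]

# Write one JSON object per line (JSONL), a common fine-tuning input format.
with open("train.jsonl", "w") as f:
    for example in examples:
        f.write(json.dumps(example) + "\n")
```

Even this toy example raises strategy questions: who writes the expert answers, where the raw inputs come from, and how you'll keep collecting them as the project grows.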

Further, data collection can take considerable time, depending on the rules and regulations surrounding specific data sources. In some cases, multiple data sources may need to be merged to acquire the data needed to support a project. Or new user interfaces may need to be developed to collect data directly from users.

All this to say: data collection, sharing, and warehousing require careful planning and prioritization, hence the need for a solid data strategy. I've seen talented data scientists and analysts leave their jobs at companies that don't have data for them to access and use in their day-to-day work. Many AI projects have been significantly delayed due to the lack of a data collection plan. So why wait?

Bottom line: Data strategy is not a hoax or a plot to make money. It's very much needed in a company's quest to achieve true digital transformation.

[Figure: Some of the elements that will make up your data strategy.]

Myth #3: GenAI applications are “agentic” by default

Agentic AI is a type of AI that can understand complex tasks and work through them by itself, with little to no human help. Like a human, it understands instructions given in plain language, sets goals, completes smaller tasks, and adapts its actions to the situation.

When you think in terms of workflows in your business, such as finding relevant candidates in recruitment, the process may start with downloading relevant candidate resumes, then reading and extracting content from those resumes, filtering down to the resumes relevant to the jobs at hand, and finally suggesting the best candidates for screening. This is what a truly agentic AI would be capable of doing without much supervision.

However, such an AI system is not natively present within LLMs. If vendors promise such systems, they've either trained their systems to perform some of these tasks, or they've manually broken down the subtasks, used AI in the parts of the workflow where it's needed, and used other approaches where AI doesn't fit.

So, what’s the big deal?

It is a big deal. Teams are getting extremely confused about how to identify AI-worthy opportunities, regardless of industry. They end up labeling entire workflows as potential AI projects when, in fact, only small parts of their workflow can be automated with AI. Such struggles are preventing these teams from making progress in project execution as they’re endlessly looking for GenAI systems that can automate entire workflows.

This type of agentic or RPA-like thinking, that AI can automate entire workflows, is brought on by exposure to ChatGPT and Copilot, which seem to complete multiple tasks at a time.

In reality, ChatGPT-type LLMs, under the hood, are fine-tuned with hundreds of thousands of examples of how to complete and break down specific tasks that users provide. So, while such a model seemingly breaks larger, more complex problems into subproblems on its own, in many cases it already knows how to accomplish this through all the training it has received over time.

Bottom line: GenAI systems are very much task-oriented and are by no means truly agentic; they merely give that illusion. This is not to say that agentic systems are implausible; they very much are possible. But in today's environment, when thinking about building GenAI-powered solutions, we still need to think in terms of the tasks the chosen LLMs are capable of handling, then identify the rest of the missing pieces of the puzzle to make the workflow more autonomous, as the sketch below illustrates.
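To make that concrete, here's a rough sketch of the recruitment workflow from earlier, manually decomposed so the LLM handles only the step it's genuinely good at (extracting structure from messy resume text) while the rest stays plain code. All function names are hypothetical stand-ins for your own integrations and model API:

```python
# A hand-decomposed recruitment workflow: AI only where it fits.
# `call_llm` and `fetch_resumes` are hypothetical stubs to replace.

def call_llm(prompt: str) -> str:
    raise NotImplementedError("Replace with your model API call.")

def fetch_resumes(job_id: str) -> list[str]:
    # Plain code, no AI: pull raw resume text from your ATS or job board.
    raise NotImplementedError("Replace with your ATS/job-board integration.")

def extract_skills(resume_text: str) -> set[str]:
    # AI step: LLMs are well suited to pulling structure out of messy text.
    answer = call_llm(f"List this candidate's skills, comma-separated:\n{resume_text}")
    return {skill.strip().lower() for skill in answer.split(",")}

def meets_requirements(skills: set[str], required: set[str]) -> bool:
    # Deterministic filter: a hard skills check needs no AI at all.
    return required.issubset(skills)

def shortlist(job_id: str, required: set[str], top_n: int = 10) -> list[str]:
    resumes = fetch_resumes(job_id)  # no AI
    qualified = [r for r in resumes
                 if meets_requirements(extract_skills(r), required)]  # AI + rule
    return qualified[:top_n]  # no AI
```

The shape is the point: each subtask is owned explicitly, and only some of them warrant an LLM call.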

Key Takeaways

  1. Software engineers alone are not enough to build robust GenAI applications; AI expertise and subject matter knowledge are critical, especially for public-facing or customer-oriented applications.

  2. Despite the advent of Generative AI (GenAI), a comprehensive data strategy remains vital for long-term AI readiness, particularly for large-scale enterprise applications.

  3. GenAI systems, while appearing autonomous, are essentially task-oriented and require specific training to perform multiple tasks. They are not inherently “agentic” or capable of breaking down and accomplishing complex tasks without human intervention in custom workflows.

Not Sure Where AI Can Be Used in Your Business? Start With Our Bestseller.

The Business Case for AI: A Leader’s Guide to AI Strategies, Best Practices & Real-World Applications.

In this practical guide for business leaders, Kavita Ganesan takes the mystery out of implementing AI, showing you how to launch AI initiatives that get results. With real-world AI examples to spark your own ideas, you’ll learn how to identify high-impact AI opportunities, prepare for AI transitions, and measure your AI performance.