AI Development vs. Traditional Software Engineering: How They Differ

This article explores six ways AI development differs from software engineering.

“AI development is a subset of software engineering!”

A manager I once worked with made sure he drove this point home.

None of the data scientists on the team appreciated this statement. They viewed machine learning and data science as forms of “science” and focused on scientific rigor rather than the quality or functionality of their code.

The manager had good intentions. He meant that we needed rigor at the source code level: documenting, unit testing, and modularizing the code, on top of ensuring that the model does what it was meant to do.

The reality is that while AI development shares many aspects with sound software engineering practices, it has a slightly different development lifecycle. This article will explore the key differences between AI/ML development and traditional software engineering.


Key Differences Between AI Development and Software Engineering

We will now examine six key areas where AI development diverges from traditional software development. The image below illustrates the two development lifecycles for comparison, and you’ll start noticing some overlaps and differences.

Traditional software development lifecycle vs. AI development lifecycle

1: AI Systems Thrive on Knowledge From Subject Matter Experts

While traditional software development primarily relies on software engineers, product managers, and QA personnel, having a subject matter expert (SME) in the loop is critical for AI development as AI systems attempt to mimic the SME’s thinking and decision-making.

By subject matter experts, I’m referring to individuals who have a deep understanding of the problem that the AI solution aims to address. These experts can typically solve the problem with high accuracy. For example, if it’s a customer service problem, the SMEs could be a group of service operators. If it’s a medical diagnosis tool, the SMEs would be a group of physicians or specialists in the field.

Correctly translating knowledge from these SMEs into AI models is crucial for the initiative’s success.

Imagine developing an AI model to predict a person’s eligibility for a home loan. Knowledge from the SME can guide the feature engineering process (signals for a model to learn from), the AI techniques to employ, and even the interpretation of results. These experts can provide invaluable insights into domain-specific nuances that can impact the performance of the AI solution.

For instance, a specific group of people may appear ineligible for a loan based on the available data. However, a distinguishing factor not clearly represented in the data might make them eligible. Incorporating this information into the development process can enhance accuracy.
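To make this concrete, here is a minimal sketch in Python of how an SME's insight might be encoded as an extra feature alongside ones an engineer would derive on their own. The column names, values, and threshold are entirely hypothetical:

```python
import pandas as pd

# Hypothetical loan-application data; column names are illustrative only.
applications = pd.DataFrame({
    "annual_income": [42_000, 55_000, 38_000],
    "existing_debt": [12_000, 30_000, 5_000],
    "years_self_employed": [6, 0, 3],
})

# Baseline feature an engineer might derive without domain input.
applications["debt_to_income"] = (
    applications["existing_debt"] / applications["annual_income"]
)

# SME-informed feature: a loan officer might point out that long-tenured
# self-employed applicants are reliable even when raw income looks low,
# a nuance that is easy to miss without domain expertise.
applications["stable_self_employed"] = applications["years_self_employed"] >= 5

print(applications)
```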

This doesn’t mean that SMEs are irrelevant in traditional software development. However, their involvement is usually more at the functionality level. Essentially, the question is: Does the tool work as it was designed (in traditional software development), or does it think like a human (in the context of AI development)?

2: AI Systems Start and End With Experimentation

In a typical software engineering project, we frequently reach for a software library that addresses a particular task. For instance, if we need a library to read and parse a file, we typically select the first one that works and proceed. This process doesn’t involve much “experimentation.”

The same does not apply to AI model development. You cannot simply use the first Large Language Model (LLM) or open-source solution that you think works and then move on.

As AI systems learn to mimic human-like thinking by making sense of data, they’re only able to complete tasks with a certain degree of confidence. In fact, models are really just making educated guesses.

As a result, the output quality can dramatically vary from model to model, even within the same class (think: Claude vs. ChatGPT) and from approach to approach (think: LLM vs. non-LLM based approaches) on the same task.

So, to develop a high-quality, deployment-ready solution, developers need to experiment with different tools, techniques, and models to find one or more that fit the bill.

Not only do you have to find the best model or approach for the AI solution, but once development is in progress, further experimentation is a must to ensure the results meet the application’s needs and reach a trade-off between accuracy and deployment timelines (see point 5).
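As a rough illustration, the experimentation loop often boils down to scoring every candidate approach against the same labeled evaluation set. The sketch below uses trivial stand-in “models” and a made-up three-example set; in practice the candidates would be real LLM prompts, fine-tuned models, or non-LLM baselines, and the evaluation set would be far larger:

```python
# Minimal sketch of the experimentation loop: every candidate model or
# approach is scored against the same labeled evaluation set.

eval_set = [
    ("The refund arrived quickly, thank you!", "positive"),
    ("Still waiting on support after two weeks.", "negative"),
    ("Order arrived damaged and nobody replied.", "negative"),
]

def candidate_keyword_model(text: str) -> str:
    # Stand-in for one approach, e.g. a rule-based baseline.
    return "positive" if "thank" in text.lower() else "negative"

def candidate_naive_model(text: str) -> str:
    # Stand-in for another approach being compared.
    return "positive"

candidates = {
    "keyword_rules": candidate_keyword_model,
    "always_positive": candidate_naive_model,
}

for name, predict in candidates.items():
    correct = sum(predict(text) == label for text, label in eval_set)
    print(f"{name}: accuracy = {correct / len(eval_set):.2f}")
```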

3: AI Systems Are Heavily Data-Dependent

One of the first questions a machine learning engineer will ask if you approach them with an AI project is, “Do you have the data necessary for this project?”

Yet data is often the last thing companies actually have! Companies have good ideas, reliable subject matter experts, machine learning engineers, business stakeholders, and even users, but they often lack good, usable data.

As Matei Zaharia, the creator of Apache Spark, often says, “It’s your data, stupid,” a pointed reminder of why models don’t behave as expected.

Data is front and center in AI development, as the AI development lifecycle shows. General software engineering, on the other hand, primarily depends on the logic embedded in the code. If a large software project has an AI component somewhere in it, that component will require data-driven development.

Even if you think you’re just going to prompt a GenAI model, the truth is you still need data to evaluate and fine-tune your solution. Otherwise, you can expect to become a household name for an AI-gone-wrong drama, as discussed in point 5.

4: AI Systems Require Iterative Improvement

A recent client shared a mild AI mishap with me. They had hired a contractor to develop a sentiment analysis solution using ChatGPT. They integrated this solution into their main product, assuming the job was complete. However, a few months later, they had to rehire the contractor.

Why?

Upon launching the AI solution, the company discovered that their initial assumptions were incorrect. The output required adjustments to align with the product’s needs. Consequently, the contractor remained on the scene for several months until the solution fit correctly within their application.

Indeed, it’s not a myth that AI requires iterative development. If you look at the AI development lifecycle, iterative development is crucial during model development and testing. Iterations may also be required long after the AI model is deployed, especially when the data or expected model behaviors change.

When it comes to conventional software development, the requirements are actually more straightforward. As long as you stick to what the application needs, you can develop, test, and push code more swiftly into production systems.

However, with AI, you’d develop, test, fine-tune, validate, and then reiterate the process until you achieve a satisfactory level of performance and the output feels just right. This iterative process is inherent to AI development and contrasts with the more linear approach of traditional software development.
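In code, that iterative loop might look something like the sketch below, where `train_and_evaluate` is a placeholder for a real develop, test, and fine-tune cycle, and the scores and thresholds are purely illustrative:

```python
# Minimal sketch of the iterate-until-good-enough loop.

TARGET_SCORE = 0.85
MAX_ITERATIONS = 5

def train_and_evaluate(iteration: int) -> float:
    # Placeholder: imagine retraining with better data or tuned prompts
    # on each pass and measuring quality on a validation set.
    simulated_scores = [0.62, 0.71, 0.79, 0.86, 0.88]
    return simulated_scores[iteration]

for iteration in range(MAX_ITERATIONS):
    score = train_and_evaluate(iteration)
    print(f"Iteration {iteration + 1}: validation score = {score:.2f}")
    if score >= TARGET_SCORE:
        print("Performance target met, ready to move toward deployment.")
        break
```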

5: AI Systems Need Quantitative & Qualitative Evaluation

You may remember the Tay Twitter bot story from my book, or the Air Canada chatbot story, where the chatbot confidently promised a bereavement reimbursement that didn’t match the airline’s actual policy. Stories like these are exactly what rigorous evaluation is meant to prevent.

During development and post-development testing, you’re essentially stress-testing and playing “mind games” with the AI system to find its points of failure using quantitative and qualitative approaches.

AI failures are not only a media talking point, but in some applications, they can cause harm to people’s lives and livelihoods, such as in medical diagnosis and loan assignments. While AI systems will never be perfect, their behavior should be predictable, they should be free of known biases, and their points of failure should be well understood, documented, and communicated.

In contrast, testing, in the typical software development sense, is all about checking whether the application works as planned, often by passing a series of tests known as a test suite. This is also applicable to an AI tool, but in addition to functionality testing, you’re also performing quality testing. Consequently, it’s critical to have appropriate evaluation metrics and processes in place to assess the quality and accuracy of AI outputs.
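Here is a minimal sketch of what that dual evaluation can look like, using a made-up three-example test set: a quantitative accuracy number plus a qualitative review queue of failures for an SME to inspect:

```python
# Quantitative metrics plus a qualitative review queue.
# (text, expected, predicted) triples; values are illustrative only.
examples = [
    ("Loan approved despite missing documents", "fail", "fail"),
    ("Standard application, clean history", "pass", "pass"),
    ("Applicant flagged for review incorrectly", "pass", "fail"),
]

# Quantitative check: how often does the system agree with the expected label?
correct = sum(expected == predicted for _, expected, predicted in examples)
print(f"Accuracy: {correct / len(examples):.2f}")

# Qualitative check: collect every disagreement so a human can inspect
# *why* the system failed, not just how often.
review_queue = [
    (text, expected, predicted)
    for text, expected, predicted in examples
    if expected != predicted
]
for text, expected, predicted in review_queue:
    print(f"REVIEW: '{text}' expected={expected} got={predicted}")
```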

6: AI Systems Must Be Monitored

Unlike traditional software systems, which need little ongoing monitoring once tested and deployed, AI solutions require continuous monitoring due to their probabilistic nature.

Model degradation (left). Periodically refreshed model (right).

The performance of AI models can degrade over time if the data they are interacting with changes. This phenomenon, known as “model drift,” calls for regular checks and, when needed, retraining the model or improving data quality to ensure it continues to perform optimally.
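Here is a minimal sketch of one way to monitor for drift, assuming you can periodically label a sample of production data and recompute accuracy; the baseline, tolerance, and weekly numbers are illustrative only:

```python
# Compare recent model accuracy against the accuracy measured at deployment
# and alert when it drops past a tolerance.

BASELINE_ACCURACY = 0.91   # measured on the test set at deployment time
TOLERANCE = 0.05           # how much degradation we accept before acting

def check_for_drift(recent_accuracy: float) -> None:
    if recent_accuracy < BASELINE_ACCURACY - TOLERANCE:
        # In a real system this would page someone or kick off retraining.
        print(f"Drift detected: accuracy fell to {recent_accuracy:.2f}")
    else:
        print(f"Model healthy: accuracy {recent_accuracy:.2f}")

# Weekly accuracy computed from freshly labeled production samples.
for weekly_accuracy in [0.90, 0.88, 0.84]:
    check_for_drift(weekly_accuracy)
```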

In contrast, traditional software systems, once developed and deployed, tend to be more “stable” as their outputs are more predictable, requiring less continuous output monitoring. The monitoring for these systems is more focused on availability and unexpected errors.

AI vs. Software Engineering: Summary

While AI development and software development intersect a great deal, the process of developing scalable, high-quality AI solutions differs from traditional software engineering.

The table below visually summarizes some of the key differences.

AI Development vs Traditional Software Engineering: Key Differences

Remember, understanding these fundamental differences between AI and traditional software development is essential for the effective planning, execution, and management of AI projects.

That’s all for now!

Not Sure Where AI Can Be Used in Your Business? Start With Our Bestseller.

The Business Case for AI: A Leader’s Guide to AI Strategies, Best Practices & Real-World Applications.

In this practical guide for business leaders, Kavita Ganesan takes the mystery out of implementing AI, showing you how to launch AI initiatives that get results. With real-world AI examples to spark your own ideas, you’ll learn how to identify high-impact AI opportunities, prepare for AI transitions, and measure your AI performance.