Buy or Build AI? How to Choose the Right Option for Business AI Integration

With the growing number of ready-to-use and semi-ready-to-use AI tools, the buy-or-build question is top of mind for many leaders. There’s a big push to start using AI immediately, but many leaders also understand that not every AI tool will fit their organization’s needs.

For example, one of our customers invested heavily in developing machine learning models internally, from scratch. After deployment, however, they were inundated with customer complaints about inaccuracies in the models’ output and needed a way out of the problem.

Our evaluation pointed to several significant problems that would have required the company to rebuild its models entirely. To prevent further setbacks and contain costs, rather than investing more in in-house development, we helped them transition to a third-party solution that outperformed their existing one while keeping costs within expectations.

There can be many such twists and turns when implementing AI. You may have budget limitations, lack sufficient data, or have IT infrastructure that is too “simple” to support the use of AI.

You may also be faced with privacy concerns, cost complications, and other important organization-specific challenges, which could push you to either build from scratch or buy an off-the-shelf solution.

Therefore, the best strategy for integrating AI is highly context-dependent. 

In this article, we’ll examine the different ways to integrate AI, along with the benefits and limitations of each, to help you assess what’s best for your organization.

Integrate Task-Specific Third-Party APIs

Task-specific third-party APIs are AI solutions that solve targeted problems and often integrate into a larger software application. Task-specific problems include labeling documents with specific categories, recognizing entities in text data, and recognizing people within images.

Benefits of Task-Specific APIs

The benefit of a task-specific third-party API is that most of the machine learning work is already done for you, and your costs are relatively predictable if you understand your approximate usage patterns. You don’t need a team of data scientists to build the solution, nor do you need to understand ML algorithms to start using these APIs. Often, you can use the output directly within your applications without customization. Examples of such APIs include Amazon Textract, Amazon Transcribe, and Vertex AI Vision.  
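
To make this concrete, here’s a minimal sketch of what calling such an API can look like, using Amazon Textract’s synchronous text-detection endpoint via boto3. The file name is a placeholder, and the sketch assumes your AWS credentials and region are already configured.

```python
# A minimal sketch (not production code): extracting text from a scanned
# document with Amazon Textract via boto3. Assumes AWS credentials are
# configured; "invoice.png" is a placeholder for a single-page PNG or JPEG.
import boto3

textract = boto3.client("textract", region_name="us-east-1")

with open("invoice.png", "rb") as f:
    document_bytes = f.read()

# One API call does the OCR; no model training or hosting on our side.
response = textract.detect_document_text(Document={"Bytes": document_bytes})

# Keep only the detected lines of text.
lines = [
    block["Text"]
    for block in response["Blocks"]
    if block["BlockType"] == "LINE"
]
print("\n".join(lines))
```

Notice that no model training, hosting, or ML expertise is involved, and because billing is per document processed, spend stays predictable once you know your volume.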

In most cases, you also won’t have to worry about maintaining and monitoring models. This is mostly taken care of as these are managed services. You can also use your existing engineering and IT teams to help evaluate and integrate the API into your business systems. 


Example architecture showing how podcast audio can support search and analytics using the Amazon Transcribe and Amazon Comprehend ML APIs. Source: aws.com

Limitations and Things to Consider

The downside of task-specific third-party APIs is that the solutions are often overgeneralized. They’re meant to work for everyone, and consequently, quality and accuracy can suffer. The main reason is that the underlying assumptions are not specific to your organization’s use cases and data.

For example, suppose you’re trying to perform sentiment analysis on customer conversations from phone transcripts. Sentiment APIs trained primarily on tweets and short comments may not work well on this data, as their training data doesn’t capture the idiosyncrasies of phone transcripts. This may lead you to think that AI doesn’t work when, in reality, the particular API is unsuitable for your use case.
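
One way to de-risk this is to benchmark the API on a small, hand-labeled sample of your own data before committing to it. Here’s a rough sketch using Amazon Comprehend’s sentiment endpoint (any sentiment API would do); the transcript snippets and labels are placeholders for your own sample.

```python
# A rough sketch: benchmarking a third-party sentiment API against a small
# hand-labeled sample of your own data. The snippets and labels below are
# placeholders; assumes AWS credentials are configured.
import boto3

comprehend = boto3.client("comprehend", region_name="us-east-1")

# Hand-labeled snippets from YOUR phone transcripts (illustrative examples).
samples = [
    ("um, yeah, so it's still not working after the reset you suggested", "NEGATIVE"),
    ("okay great, that fixed it, thanks so much for your patience", "POSITIVE"),
]

correct = 0
for text, expected in samples:
    result = comprehend.detect_sentiment(Text=text, LanguageCode="en")
    if result["Sentiment"] == expected:
        correct += 1

print(f"Agreement with our labels: {correct}/{len(samples)}")
```

If agreement on a few hundred such samples falls below your tolerance, you’ll know before integration rather than after deployment.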

Additionally, task-specific third-party APIs are often available as a cloud service, meaning you need to send data over the internet to get predictions back. This could expose confidential, sensitive, or proprietary data to a third-party service. It’s often unclear how your data is stored or used once the third-party service has collected it, unless this is clearly specified in their terms of service.

Bottom Line

Task-specific third-party APIs are an incredibly convenient way to integrate AI without hiring data scientists or understanding ML model development. Further, as they’re already tuned for a specific task, the costs are more predictable if you understand your usage. This convenience may, at times, come at the expense of accuracy and data privacy, so weigh these elements when considering task-specific third-party APIs.

Customize or Fine-Tune LLMs

There has been a surge of AI models capable of comprehending and producing language, known as large language models (LLMs). These models are pre-trained on large data sets and can be accessed through a straightforward API call. Providers of such LLMs include OpenAI (the GPT models), Hugging Face (BLOOM), and Cohere. There are also LLMs that can run locally on your own hardware, such as Alpaca (Stanford) and LLaMA (Meta).

A Handful of LLM providers. Source: https://cobusgreyling.medium.com/the-large-language-model-landscape-9da7ee17710b  

The nice thing about LLMs is that they can execute various tasks based on instructions (aka instruction following, or prompting) and a few examples of what the output should look and feel like. You’re essentially using AI to develop AI. These tasks can range from text classification to entity extraction to writing emails, resumes, poems, and ad copy.

Benefits of Customizing LLMs

The benefit of pre-trained language models for AI application development is the speed of developing a range of intelligent solutions using just ONE base model. You can use it to build sentiment analysis and machine translation applications, generate ad copy, write essays, and more. Of course, you’ll have to write the instructions (or prompts) in a way that generates the desired output.

The beauty of this is that you don’t need large amounts of training data; at times, you need no training data at all. Plus, anyone can develop these applications; it doesn’t have to be a machine learning engineer.
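
For instance, here’s a minimal sketch of a sentiment classifier built with nothing but a prompt against one base model, using the OpenAI Python SDK; the model name and prompt wording are illustrative assumptions. Swapping out the system prompt is all it takes to turn the same base model into a translator or an ad-copy writer.

```python
# A minimal sketch: one base model turned into a sentiment classifier with
# nothing but a prompt. The model name is an illustrative assumption;
# assumes OPENAI_API_KEY is set in the environment.
from openai import OpenAI

client = OpenAI()

def classify_sentiment(text: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; any capable chat model works
        messages=[
            # The instruction (prompt) stands in for task-specific training data.
            {"role": "system",
             "content": "Classify the sentiment of the user's message as "
                        "positive, negative, or neutral. Reply with one word."},
            {"role": "user", "content": text},
        ],
        temperature=0,  # keep outputs stable for a labeling task
    )
    return response.choices[0].message.content.strip().lower()

print(classify_sentiment("The support agent resolved my issue in minutes!"))
```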

Limitations and Things to Consider

Despite the convenience, LLMs are not suitable for every use case. One big known problem with LLMs is hallucination, where facts are added to an answer, or answers are “made up” when the right answer is not known. This is a huge risk for applications that depend on facts, and it becomes especially prominent when the output is fairly open-ended. For example, when ChatGPT was asked what “LLaMA” is in the context of LLMs, here’s what it answered:

In the context of LLMs, “LLaMA” refers to “Low-Latency Large Language Model Inference Acceleration”, which is a system developed by Microsoft Research to enable the deployment of large language models (LLMs) on commodity hardware with low latency and high throughput.

LLaMA achieves this by utilizing a combination of model compression, quantization, and efficient hardware acceleration techniques to reduce the computational resources required for running LLMs. This makes it easier and more cost-effective for organizations to leverage the capabilities of LLMs in their applications, particularly those that require low-latency inference.

Overall, LLaMA is an important development in the field of LLM deployment, as it enables the efficient and scalable deployment of LLMs on commodity hardware, which can have a wide range of practical applications in natural language processing, speech recognition, and other related fields.

ChatGPT’s response to “What’s LLaMA in the context of LLMs”

In reality, LLaMA stands for Large Language Model Meta AI (not “Low-Latency Large Language Model Inference Acceleration,” as ChatGPT claims) and was released by Meta AI (not Microsoft Research, as ChatGPT claims) in February 2023. It is, however, right about the usage on commodity hardware.

So, ChatGPT made things up instead of saying, “I don’t know.” This is extremely risky in many applications, which is why controlling the output of these LLMs through prompting or other mechanisms is absolutely essential.

For example, when asked, “What is LLaMA in the context of LLMs that was released in February 2023?”, ChatGPT changed its answer entirely.

As an AI language model, I must inform you that my previous statement was incorrect, as I had mentioned that LLaMA was developed by Microsoft Research. However, there is no such system developed by Microsoft Research with the name LLaMA.

Moreover, it is not possible for me to provide information about something that was released in February 2023, as it is a future date beyond my knowledge cutoff of September 2021. Therefore, I cannot provide an answer to your question as it contains inaccurate information. If you have any other questions, please let me know, and I will do my best to provide accurate and helpful information.

ChatGPT’s response to “What’s LLaMA in the context of LLMs that was released in February 2023” right after asking ChatGPT “What’s LLaMA in the context of LLMs”

In summary, if you ask an LLM to perform a novel task, like predicting the risk of a patient having lung cancer based on a description of the patient’s medical history, it may very well attempt it, but the results could be completely unreliable, as it may simply guess based on data it has seen.
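
One common way to exert the control mentioned above is to ground the model in context you supply and explicitly permit “I don’t know.” Below is a rough sketch; the prompt wording and model name are again illustrative assumptions, not a guaranteed fix for hallucination.

```python
# A rough sketch of grounding: restrict the model to supplied context and
# explicitly allow "I don't know". Prompt wording and model name are
# illustrative assumptions.
from openai import OpenAI

client = OpenAI()

CONTEXT = (
    "LLaMA (Large Language Model Meta AI) is a family of large language "
    "models released by Meta AI in February 2023."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "system",
         "content": "Answer ONLY from the provided context. If the context "
                    "does not contain the answer, reply exactly: I don't know."},
        {"role": "user",
         "content": f"Context: {CONTEXT}\n\nQuestion: Who developed LLaMA, "
                    "and when was it released?"},
    ],
    temperature=0,
)
print(response.choices[0].message.content)
```

This doesn’t eliminate hallucination, but it narrows the model’s room to improvise.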

Further, from a cost perspective, fine-tuning and developing applications on top of pre-trained models means you may be pouring money into LLM APIs much earlier in the game than with task-specific APIs, where you clearly understand the required input and the output it produces.

Bottom Line

LLMs are very flexible in that they can be customized to perform many tasks that require human-level intelligence, with simple instructions and prompting, using just one base model. However, if used incorrectly, these pre-trained models have a habit of being overconfident and providing answers that are way off from reality. This can have adverse effects, from people getting hurt by Dr. ChatGPT’s diagnoses to students learning twisted historical facts from Prof. ChatGPT.

Further, unless you opt for open-source LLMs, you could pour a considerable amount of money into fine-tuning and making LLMs work for your applications. From experience, this is not a route many startups can afford when there are more affordable and predictable ways of achieving the same results.

Use AI in Packaged Applications

Packaged AI applications simply mean that the AI solution exists within a larger software application. 

For example, Splunk, an IT operations software solution, uses AI behind the scenes to move teams from reactive to preventative responses with predictive technologies. Specific features within the application use AI, including event noise reduction and predicting health scores before service outages.

While the features in such systems occasionally need to be trained on company-specific data, in many cases the AI features are ready to use right out of the box.

Benefits of Packaged AI

The benefit of using AI within prepackaged software is the convenience that comes with it. You don’t need to “integrate” the solution, you don’t have to customize it, and you don’t need developers in the loop; you just use the AI features as you would any software tool. Security is also less of a concern when these tools run within company firewalls, as no data leaves company databases for a third-party service.

Limitations and Things to Consider

With the convenience of prepackaged AI also comes the issue of usability. While some prepackaged AI tools work seamlessly untouched, others struggle to make accurate predictions on your data.

That’s because, as with the task-specific third-party API option, these AI tools are meant to work broadly, for every organization. This usually comes at the cost of the accuracy and reliability of the solution.

Bottom Line

Prepackaged AI tools can often be used right out of the box and are super convenient for a company looking to leverage AI within its workflow. The only way to know whether a tool will work for your use cases and your company’s data is to evaluate it for a period of time.

Build From Scratch

Building from scratch means creating custom models for your AI problems. This can take the form of models developed from scratch using machine learning and NLP libraries, an open-source solution that you customize, models developed using AutoML tools, or your own LLMs trained and then customized for specific applications. Typically, the developer or development team you hire is responsible for the success or failure of the solution.
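
To illustrate the first of these forms, here’s a minimal sketch of a custom text classifier trained on your own labeled data with scikit-learn. The tiny inline dataset is purely illustrative; a real project would involve thousands of labeled examples, a held-out test set, and proper evaluation.

```python
# A minimal sketch of a custom model built "from scratch" with scikit-learn:
# a support-ticket classifier. The tiny inline dataset is illustrative only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Your domain-specific labeled data (hypothetical examples).
texts = [
    "Claim denied due to missing policy number",
    "Payment posted to the wrong account",
    "Please update my mailing address",
    "Card declined at point of sale",
]
labels = ["claims", "billing", "account", "billing"]

# TF-IDF features plus a linear classifier: a cheap, strong baseline that you
# fully control, train on your own data, and can evaluate however you like.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

print(model.predict(["My card was charged twice"]))
```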

Benefits of Building AI from Scratch

The single largest benefit of developing something from scratch is control: control over what you’re building, the quality of the solution, the data that goes into training or fine-tuning your models, and how the models are tested, validated, and integrated into your workflow.

Building a tool from scratch is very beneficial in sensitive domains such as law enforcement, finance, and healthcare, where this level of control allows you to account for appropriate fallbacks during model failure and minimize downstream issues such as unwanted biases. Further, some domains are so niche that the available AI tools may not work well in those scenarios. In such cases, the best option is to custom-build.

Limitations and Things to Consider

To build from scratch, you need the relevant personnel to develop the solution and later integrate it into your workflow. So you’ll have to decide whether to hire employees or outsource the development; both have their pros and cons, and one may be a better option than the other given the number of projects on hand and budget limitations.

Also, building from scratch is often more expensive than leveraging an off-the-shelf tool, although, in the long term, the costs can become comparable. It also takes time to build custom solutions, so you’ll have to factor that in. 

If you don’t have the building blocks ready for development, it can take even longer than expected. For example, you may have to add three months to your development timeline just to account for data collection.

Bottom Line

Although building a solution from scratch can seem cumbersome and costly, in many instances, it may be necessary. Your AI problem may be too domain-specific to warrant a generalized solution, or the problem may require a significant amount of control in terms of the data it’s trained on and how it’s evaluated. So don’t rule out this option. There are strategic ways to develop AI solutions cost-effectively and incrementally improve these solutions over time. 

Reality: Hybrid Build 

This is something you may not expect, but for most of our clients, we use a combination of approaches to solve a single problem, as real-world problems are often not as clear-cut or well-defined as research problems. 

Take an information extraction project on PDFs. The extraction itself may require AI in the form of NLP, but perhaps some content can be extracted using open-source tools, while highly domain-specific content may require custom models.

We have even leveraged paid APIs for parts of the work and custom-developed AI tools for parts that didn’t have open-source or paid tools. Further, that’s only the information extraction piece. What about the data collection and integration? Those pieces may require their own custom algorithms in combination with open-source AI tools, paid APIs, or custom-built AI. This is the reality of solving a real-world problem. You’re right, it’s not as elegant as a mathematical formula.
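
To ground this in code, here’s a rough sketch of what such a hybrid extraction pipeline can look like: an open-source PDF parser (pypdf) for generic text extraction, a pretrained open-source NER model (spaCy) for common entity types, and a placeholder for the custom model that covers the domain-specific content neither handles. All names here are illustrative.

```python
# A rough sketch of a hybrid extraction pipeline: open-source PDF parsing
# (pypdf), a pretrained NER model (spaCy; assumes
# `python -m spacy download en_core_web_sm` was run), and a placeholder for
# a custom model covering domain-specific fields. All names are illustrative.
import spacy
from pypdf import PdfReader

nlp = spacy.load("en_core_web_sm")  # off-the-shelf open-source NER

def custom_domain_extractor(text: str) -> dict:
    # Placeholder: in practice, a model trained on your own labeled documents.
    return {}

def extract(path: str) -> dict:
    # Step 1: open-source tool handles generic text extraction.
    text = "\n".join(page.extract_text() or "" for page in PdfReader(path).pages)

    # Step 2: pretrained model handles common entity types (people, orgs, dates).
    entities = [(ent.text, ent.label_) for ent in nlp(text).ents]

    # Step 3: custom-built model handles what neither of the above covers.
    domain_fields = custom_domain_extractor(text)

    return {"entities": entities, "domain_fields": domain_fields}

print(extract("contract.pdf"))  # "contract.pdf" is a placeholder path
```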

Build or Buy AI Software: Summary

In this article, we explored four approaches to implementing AI:

  • Leveraging task-specific APIs

  • Customizing LLMs

  • Using AI in prepackaged software

  • Building from scratch

In the end, though, you may use a combination of these approaches for a single AI problem (a hybrid build).

With different options for implementing AI in your business, choose the one that makes the most sense, given your circumstances.  

Every team, organization, or product will have its constraints, be it IT infrastructure, budget, a low risk tolerance, or privacy concerns. Further, you may have a sizable problem that requires a combination of approaches.

So be highly strategic in approaching AI implementation, and remember that you can continually iterate and improve with time.
