Artificial Intelligence: Get It from the Cloud, or Develop It Yourself?
All of the big tech companies offer specialized artificial intelligence tools. IBM has Watson, Google offers Dialogflow and Vision, Microsoft has its Cognitive Services, Amazon has Rekognition, and Facebook has Wit.ai. But what can you actually do with these tools, and when can you use them? And when is it better to develop an algorithm yourself? In this article, I will explain which factors you need to take into consideration in order to make the right choice.
By Ivo Fugers, Data Scientist at Ortec
There are countless options for building an AI application. The open-source world offers plenty of software, such as R, Python, or TensorFlow, and the open-source community constantly extends that collection with specialized packages that each solve a specific problem. The big tech companies also offer tools that further support the data science process, such as Azure Databricks or Google Cloud AI. More recently, standard ‘cognitive’ APIs have joined the crowd: algorithms that come pre-trained for a specific purpose.
Data scientists always build on the work of others. The question, however, is how far you go in using other people’s specialized work, and when you should take the reins yourself. The ultimate decision depends on a large number of factors, varying from the final application and the available budget to your organization’s existing IT landscape. So let’s begin by looking at the ‘cognitive’ APIs. In general, the solutions available as an API can be divided into the following categories:
- Vision: These are algorithms that can analyze images or videos, including face recognition, object recognition, or optical character (text) recognition. Facebook uses these types of algorithms to automatically tag you in photos, for example.
- Speech: These are algorithms that can convert text into speech and vice-versa. They are a possible add-on for chatbots: a telephone helpdesk, for example, can first identify the subject using speech recognition before transferring the call to a human assistant, or can even hold the entire conversation with the user.
- Language: These algorithms are used in the automated comprehension of words, language, and conversations. They are essential components of chatbots, search engines, translation programs, and other applications that use natural language processing (NLP).
- Personality: These algorithms can recognize emotion and sentiment in a conversation, or determine the user’s personality based on their word choice. Such algorithms can be used to support call center employees or personalize marketing campaigns.
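To make these categories concrete, here is a minimal sketch of what a ‘cognitive’ API exchange typically looks like: a JSON request carrying base64-encoded input, and a JSON response with labels and confidence scores. The field names and the canned response below are illustrative assumptions, not any vendor’s actual schema; the HTTP round trip is stubbed out.

```python
import base64
import json

# Hypothetical request/response shapes, loosely modelled on the vision
# APIs described above -- all field names are illustrative only.

def build_vision_request(image_bytes: bytes, feature: str = "LABEL_DETECTION") -> dict:
    """Package raw image bytes as a typical cognitive-API JSON request."""
    return {
        "image": {"content": base64.b64encode(image_bytes).decode("ascii")},
        "features": [{"type": feature}],
    }

def parse_vision_response(response_json: str, min_score: float = 0.5) -> list[str]:
    """Keep only the labels the service is reasonably confident about."""
    data = json.loads(response_json)
    return [
        label["description"]
        for label in data.get("labels", [])
        if label.get("score", 0.0) >= min_score
    ]

# Canned response, standing in for a real HTTP round trip:
fake_response = json.dumps({
    "labels": [
        {"description": "car", "score": 0.97},
        {"description": "bumper", "score": 0.81},
        {"description": "tree", "score": 0.31},  # below threshold, filtered out
    ]
})

request = build_vision_request(b"\x89PNG...")   # real image bytes in practice
print(parse_vision_response(fake_response))      # ['car', 'bumper']
```

The pattern is the same across categories: you send encoded input plus a feature request, and you get back structured predictions with scores that your application must still interpret.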
Benefits of a standard AI tool
IBM, Amazon, Google, and Microsoft offer suites of ready-made AI tools that provide several benefits. You can see these applications as standardized AI engines, which you can use for the eventual application. They are available via the cloud, and are therefore quick to deploy and easy to scale up. You also benefit from the ‘AI arms race’ that seems to be raging among the tech giants at the moment. They all want to win the battle for the user, and they devote considerable energy to development, which makes their AI systems increasingly powerful. The applications in the fields of speech, text, and facial recognition are now so effective that it no longer pays to develop them yourself. However, I have noticed that support for Dutch lags behind: language and speech services in Dutch sometimes leave much to be desired, although there is progress.
Disadvantages of a standard AI tool
The disadvantage of AI in the cloud is that these services are often highly generic and cannot be customized. That has consequences for the flexibility of the final application. You are also dependent on your current IT landscape when choosing an API. If you already have good contracts with Microsoft, for example, then Azure Cognitive Services may be extra appealing, because the final application will integrate well with your current landscape, and the services will therefore be less expensive. However, that does not necessarily mean that Azure Cognitive Services is by definition the best solution.
An API isn’t quite a solution
The AI algorithm in an API can do the one thing it is trained to do extremely well, but nothing else. Algorithms are often ascribed miraculous properties, but they almost always disappoint in the end. An API also has to be embedded in a complete application: a tool has to be programmed, an infrastructure has to be set up, data engineering is needed, and so on. Building a chatbot using a tech company’s standard APIs takes around an hour. But a chatbot that actually replaces 10% of your customer service would take at least half a year to build. Like other so-called ‘AI’ solutions, the tech companies’ APIs cannot think or act on their own. AI isn’t magic; it’s machine learning mixed with smart programming.
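The glue code around a single API call illustrates the point: even a working intent classifier needs thresholds, fallbacks, and escalation logic before it can replace any part of customer service. A minimal sketch of that routing layer, with the cloud classifier stubbed out (the intents, scores, and threshold are invented for illustration):

```python
# Sketch of the routing logic an application needs around a chatbot
# intent API. classify_intent stands in for a call to a cloud NLP
# service; the intents and confidence scores are invented.

CONFIDENCE_THRESHOLD = 0.75

def classify_intent(utterance: str) -> tuple[str, float]:
    """Stub for a cloud intent-classification call."""
    canned = {
        "where is my order": ("track_order", 0.92),
        "asdf qwerty": ("unknown", 0.12),
    }
    return canned.get(utterance.lower(), ("unknown", 0.0))

def route(utterance: str) -> str:
    """Answer automatically only when the model is confident;
    otherwise escalate to a human agent."""
    intent, score = classify_intent(utterance)
    if score >= CONFIDENCE_THRESHOLD and intent != "unknown":
        return f"bot:{intent}"
    return "human:escalate"

print(route("Where is my order"))  # bot:track_order
print(route("asdf qwerty"))        # human:escalate
```

None of this logic comes from the API itself; it is exactly the surrounding engineering that makes the difference between a one-hour demo and a production chatbot.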
Reasons to choose in-house development
Several factors should be taken into consideration when choosing between a standard AI tool and developing an algorithm yourself. The final application is the most important of these. The more specific the application, the more it pays to develop an algorithm yourself. An insurer that wants to classify automotive claims automatically from a photo would do well to develop its own algorithm, for example, because there is no standard ‘automotive damage algorithm’. The insurer could choose to train existing, general image recognition algorithms using labelled data, such as images of cars with and without damage, but an algorithm developed specifically for that purpose would always perform better, mainly because you would be able to use human ‘deduction’. Another advantage is that you can then sell the application you have developed to other parties, or develop it further together with them.
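The insurer scenario amounts to transfer learning: reuse a general pretrained image model as a fixed feature extractor, and train only a small classifier head on the labelled damage photos. A minimal sketch of that head-training step, with the pretrained features simulated by random vectors (in a real pipeline they would be embeddings from a network built in, say, TensorFlow, and the labels would come from the insurer’s photos):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated "pretrained" feature vectors: in practice these would be
# embeddings from a general image model applied to labelled photos of
# damaged (y=1) and undamaged (y=0) cars.
n, d = 200, 16
X = rng.normal(size=(n, d))
true_w = rng.normal(size=d)
y = (X @ true_w > 0).astype(float)  # synthetic labels for the sketch

# Train only a small logistic-regression "head" on top of the frozen
# features -- the transfer-learning step described above.
w = np.zeros(d)
lr = 0.5
for _ in range(500):
    z = np.clip(X @ w, -30, 30)        # avoid overflow in exp
    p = 1.0 / (1.0 + np.exp(-z))       # sigmoid predictions
    w -= lr * X.T @ (p - y) / n        # gradient of the log-loss

accuracy = ((X @ w > 0) == (y == 1)).mean()
print(f"training accuracy on the synthetic set: {accuracy:.2f}")
```

The point of the sketch is the division of labour: the expensive general model is reused as-is, and only the cheap, domain-specific head is trained on the insurer’s own data.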
Building in-house is also a logical choice if the final application is closely related to your core business. Booking.com, which develops everything itself, is an excellent example. For the past two years, fifty people have been working on its chatbot. This is a huge investment, but the application touches the core of the company’s operations, so naturally they want full control over it. However, not every company has the same budget at its disposal as Booking.com. Budget is therefore absolutely a factor in the decision-making process, but it is worth realizing that developing in-house is not always more expensive than using an existing algorithm. If you expect the AI application to be used intensively, then it may be significantly cheaper in the long run to develop the algorithm yourself, because the existing tools are pay-per-use. Those costs can stack up, which makes scaling up less attractive.
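That budget trade-off is easy to make explicit with a back-of-the-envelope break-even calculation. All figures below are invented for illustration; the point is the shape of the comparison, not the numbers.

```python
# Back-of-the-envelope comparison of pay-per-use API costs versus a
# one-off in-house build. All figures are invented for illustration.

api_cost_per_call = 0.002           # pay-per-use price per API request
inhouse_build_cost = 120_000.0      # one-off development investment
inhouse_run_cost_per_call = 0.0002  # hosting/inference cost per request

def break_even_calls() -> float:
    """Number of calls after which in-house becomes cheaper."""
    return inhouse_build_cost / (api_cost_per_call - inhouse_run_cost_per_call)

calls = break_even_calls()
print(f"break-even at about {calls:,.0f} calls")
# With these assumed figures: about 66,666,667 calls -- in-house only
# pays off for genuinely intensive use.
```

Low expected volume favours the pay-per-use API; high, sustained volume is what makes the fixed in-house investment attractive.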
The AI tools offered by IBM, Amazon, Google, and Microsoft are advanced. If you are looking for an algorithm for speech or facial recognition, then the best option is to get one from the cloud. However, it is important to realize that the differences between these tech giants are small. Google is currently best at converting images of text into text, for example, and Amazon leads in face recognition, but that could change within a few months. My advice is therefore to test different APIs in a proof of concept before making a decision. It is also wise to organize your application in such a way that it is easy to switch to another API at a later moment. However, existing AI solutions may be too general for your specific goals, or you may expect to make intensive use of the algorithm, in which case in-house development is the right choice.