Artificial Intelligence: get it from the Cloud, or Develop it Yourself?

All of the big tech companies offer specialized artificial intelligence tools: IBM has Watson, Google offers Dialogflow and Vision, Microsoft has its Cognitive Services, and Amazon has Rekognition. But what can you actually do with these tools, and when can you use them? And when is it better to develop an algorithm yourself? In this article, I will explain which factors you need to take into consideration in order to make the right choice.

By Ivo Fugers, Data Scientist at Ortec

There are countless options for building an AI application. The open-source world offers plenty of software solutions, such as R, Python, or TensorFlow, and the open-source community is constantly expanding the collection with specialized packages that each solve a specific problem. The big tech companies also offer tools that can further support the data science process, such as Azure Databricks or Google Cloud AI. Recently, the standard ‘cognitive’ APIs have joined the crowd: algorithms that are pre-trained for a specific purpose.

Data scientists always build on the work of others. The question, however, is: how far will you go in using other people’s specialized work, and when should you take the reins yourself? The ultimate decision depends on a large number of factors, varying from the final application and the available budget to your organization’s existing IT landscape. So let’s begin by looking at the ‘cognitive’ APIs. In general, the solutions available as an API can be divided into the following categories:

  • Vision: These are algorithms that can analyze images or videos, including face recognition, object recognition, or optical character (text) recognition. Facebook uses these types of algorithms to automatically tag you in photos, for example.
  • Speech: These are algorithms that can convert text into speech and vice versa. They are a possible add-on for chatbots: a telephone helpdesk, for example, can first identify the subject using speech recognition before transferring the call to a human assistant, or even conduct the entire conversation with the user itself.
  • Language: These algorithms are used in the automated comprehension of words, language, and conversations. They are essential components of chatbots, search engines, translation programs, and other applications that use natural language processing (NLP).
  • Personality: These algorithms can recognize emotion and sentiment in a conversation, or determine the user’s personality based on their word choice. Such algorithms can be used to support call center employees or personalize marketing campaigns.

Benefits of a standard AI tool

IBM, Amazon, Google, and Microsoft offer suites of ready-made AI tools that provide several benefits. You can see these applications as standardized AI engines, which you can use for the eventual application. They are available via the cloud, and are therefore quick to use and easy to scale up. You can also benefit from the ‘AI arms race’ that seems to be raging among the tech giants at the moment. They all want to win the battle for the user, and they devote considerable energy to development, which makes AI systems increasingly powerful. The applications in the fields of speech, text, and facial recognition are now so effective that it no longer pays to develop them yourself. However, I have noticed that some developments in Dutch are lagging behind: language and speech applications for Dutch sometimes leave much to be desired, but there is progress.

Disadvantages of a standard AI tool

The disadvantage of AI in the cloud is that these services are often designed in highly general terms and cannot be customized. That has consequences for the flexibility of the final application. You are also dependent on your current IT landscape when choosing an API. For example, if you have good contracts with Microsoft, then Azure Cognitive Services may be extra appealing, because the final application will integrate well with your current landscape, and the services will therefore be less expensive. However, that does not necessarily mean that Azure Cognitive Services is by definition the best solution.

An API isn’t quite a solution

The AI algorithm in an API can do the one thing it’s trained to do extremely well, but nothing else. Algorithms are often ascribed miraculous properties, but they almost always disappoint in the end. An API also has to be embedded in a complete application: a tool has to be programmed, an infrastructure has to be organized, data engineering has to be set up, and so on. Building a chatbot using a tech company’s standard APIs takes around an hour, but a chatbot that actually replaces 10% of your customer service would take at least half a year to build. Like other so-called ‘AI’ solutions, the tech companies’ APIs cannot think or act by themselves. AI isn’t magic; it’s machine learning mixed with smart programming.

Reasons to choose in-house development

Several factors should be taken into consideration when choosing between a standard AI tool and developing an algorithm yourself. The final application is the most important of these factors. The more specific the application, the more it pays to develop an algorithm yourself. An insurer that wants to automatically classify automotive claims from a photo would do well to develop its own algorithm, for example, because there are no standard ‘automotive damage algorithms’. The insurer could choose to train existing, general image recognition algorithms using labelled data, such as images of cars with and without damage, but an algorithm developed specifically for that purpose will generally perform better, mainly because you can build in human ‘deduction’. Another advantage is that you can then sell the application you’ve developed to other parties, or build it together with other parties.
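To make the labelled-data idea concrete, here is a toy sketch, in no way a real damage-detection model: a tiny linear classifier (a perceptron) trained on invented, hand-labelled feature vectors that stand in for damage photos. The feature names and numbers are assumptions for illustration only.

```python
# Toy sketch: train a simple linear classifier on labelled feature
# vectors, standing in for "train an algorithm on labelled damage data".

def train_perceptron(samples, labels, epochs=20, lr=0.1):
    """samples: list of feature vectors; labels: 1 (damage) / 0 (no damage)."""
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred
            # Nudge the weights toward the correct answer
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# Invented features per photo: [dent_area, scratch_length]
X = [[0.9, 0.8], [0.7, 0.9], [0.1, 0.0], [0.0, 0.2]]
y = [1, 1, 0, 0]  # human-provided labels: damage yes/no
w, b = train_perceptron(X, y)
print(predict(w, b, [0.8, 0.7]))  # clearly damaged car -> 1
```

A real solution would of course start from a pre-trained image model rather than raw feature vectors, but the human labelling step is the same.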

The choice to build in-house is also logical if the final application is closely related to your core business. One company that develops everything itself is an excellent example: for the past two years, fifty people have been working on its chatbot. This is a huge investment, but the application touches the core of the company’s operations, so naturally it wants to have full control over it. However, not every company has that kind of budget at its disposal. Budget is therefore absolutely a factor in the decision-making process, but it pays to realize that developing in-house is not always more expensive than using an existing algorithm. If you expect that the AI application will be used intensively, then it may be significantly cheaper in the long run to develop the algorithm yourself, because the existing tools are pay-per-use. Those costs can stack up, which makes it less attractive to scale up.


The AI tools offered by IBM, Amazon, Google, and Microsoft are advanced. If you are looking for an algorithm for speech or facial recognition, then the best option is to get one from the cloud. However, it is important to realize that the differences between these tech vendors are small. For example, Google is currently best at converting images of text into text, and Amazon is number one at recognizing faces, but that could change in just a few months. My advice is therefore to test different APIs in a proof of concept before making a decision. It is also useful to organize your application in such a way that it is easy to switch to another API at a later moment. However, existing AI solutions may be too general for your specific goals, or you may expect to make intensive use of the algorithm, in which case in-house development is the right choice.
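The advice to keep switching easy can be sketched as a thin abstraction layer. The classes and method names below are invented stubs, not real vendor SDK calls; the point is that the rest of the application only ever sees the shared interface, so swapping vendors becomes a configuration change.

```python
# Sketch: hide each cloud vendor behind one common interface.
from abc import ABC, abstractmethod

class FaceRecognizer(ABC):
    @abstractmethod
    def detect_faces(self, image_bytes: bytes) -> list:
        ...

class StubAmazonRecognizer(FaceRecognizer):
    def detect_faces(self, image_bytes):
        # A real implementation would call the Rekognition SDK here.
        return ["face-from-amazon"]

class StubAzureRecognizer(FaceRecognizer):
    def detect_faces(self, image_bytes):
        # A real implementation would call Azure Cognitive Services here.
        return ["face-from-azure"]

def count_faces(recognizer: FaceRecognizer, image: bytes) -> int:
    # Application logic depends only on the interface, never on a vendor.
    return len(recognizer.detect_faces(image))

print(count_faces(StubAmazonRecognizer(), b""))  # 1
print(count_faces(StubAzureRecognizer(), b""))   # 1
```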



3 Reasons Why Not to Blindly Trust Predictions

Data scientists are basically fortune tellers. They predict the future by looking at what happened in the past. However, they don’t use a crystal ball; they use advanced mathematical and statistical models to find correlations and connections in large amounts of data. They project these onto the present in order to predict what will happen with as much certainty as they can. That does not mean you should always blindly trust these prediction models, especially not when human lives are concerned. When using artificial intelligence (AI) in decision-making, always consider this trio of critical footnotes before drawing definitive conclusions.

By Erica D’Acunto, senior data scientist at ORTEC

Margin of error

Every AI prediction has a margin of error, no matter how small. No matter how rich the historical data, and no matter how advanced the model applied to it: a 100% chance only exists in theory. This margin of error is often acceptable for predictions where only money is at stake. A bread factory wanting to predict demand in order to reduce waste? A model for investors to predict movements in share prices on the stock market? A predictive maintenance application that predicts when a machine part will have to be replaced? If the prediction model works with 98% accuracy, it naturally offers great added business value; that remaining 2% is then negligible. But awareness of this margin of error, no matter how small, becomes much more important when the prediction affects human lives. Would you dare to go into traffic among self-driving vehicles that anticipate situations correctly in 98% of cases? If tax evasion could be predicted with 98% certainty, should every person the system flags be preventively arrested? Of course, you should take data-supported advice seriously and weigh it in your final decision. However, be aware of the margin of error and do not blindly trust a prediction: do additional research and try to interpret the results.


Bias in the data

Prediction models are only as good as the data used to train them. Imagine you are teaching an AI model to distinguish cats from dogs and you then show it a picture of a fox: the model will not know what to do with it. In many cases, however, this ‘bias’ is not as clear as in this example; it can be difficult to discover. Even an apparently perfect data set can produce confusing results, for example because a certain category is under-represented. A bias in a data set can also result from a bias in the knowledge or beliefs of the person who created the data set. As the presence of a bias is not always that clear, it is all the more important to be able to recognize it. Take a web shop that wants to predict what type of shoes a certain customer likes, so as to be able to make better recommendations. To train the machine learning algorithm, the customer’s purchasing history and the purchasing histories of other customers are used. If many of the previous purchases were made by women, the training data represent more female than male preferences. That creates a bias in the data, and thus also in the algorithm trained with this data. Eventually, this produces a situation in which the algorithm’s recommendations for female customers are much better than those for male customers.
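The web-shop example can be shown in miniature with a deliberately naive model and invented purchase data: a recommender that simply suggests the globally most popular item serves the over-represented group well and the under-represented group badly.

```python
# Toy illustration of bias from an imbalanced training set.
from collections import Counter

# 90 purchases by women, 10 by men: the imbalance in the training data
purchases = ([("F", "heels")] * 70 + [("F", "sneakers")] * 20
             + [("M", "boots")] * 8 + [("M", "sneakers")] * 2)

# "Train": recommend the globally most popular item to everyone
most_popular = Counter(item for _, item in purchases).most_common(1)[0][0]

def hit_rate(gender):
    # How often the recommendation matches that group's actual purchases
    group = [item for g, item in purchases if g == gender]
    return sum(item == most_popular for item in group) / len(group)

print(most_popular)  # 'heels' wins because women dominate the data
print(f"women: {hit_rate('F'):.0%}, men: {hit_rate('M'):.0%}")
```

Real recommenders are far more sophisticated, but the mechanism is the same: the majority group in the training data dominates the learned behavior.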

A few practical examples of predictors with such a bias are the antisemitic chatbot Tay and the LinkedIn search engine that developed a preference for male names. But here, too, it becomes more harmful when skewed predictors affect human lives. In the US, for example, the police use algorithms to ‘predict’ where criminal hotspots will be. Trained on historical crime data, these applications lead to an overrepresentation of police in poorer neighborhoods with a primarily black population. This in turn leads to more arrests of black people, and that data is then fed back into the algorithm, creating a vicious cycle. Predictive policing, as this application is called, is used in the Netherlands as well.

The Black Box

On top of these two theoretical arguments, there is also a practical argument for not blindly trusting the predictions of an algorithm: it is often unclear how some of these algorithms reach their conclusions. Algorithms are already being used to predict whether someone is creditworthy or eligible for a job. With access to the underlying mathematical models that make these predictions, you could ascertain which indicators these systems use. But increasingly often, the algorithms used are so complex that the choices they make can no longer be interpreted, and therefore cannot be checked. Not even by the people who built them. You might not worry about not knowing why an algorithm shows you a certain ad on the internet, or how it determines whether you like a certain artist. A black box algorithm that works very well is useful; however, if our goal is to learn more about a phenomenon, then we should put more effort into understanding how it draws its conclusions. Didn’t you always have to show your calculations on math tests to prove that you understood how it worked?
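The "show your calculations" point can be made concrete with the opposite of a black box: a one-variable linear model (fitted on invented numbers) whose entire "reasoning" consists of two coefficients you can read off directly. The feature and the data are assumptions for illustration only.

```python
# A fully transparent model: closed-form least squares for one feature.
# score = a * income + b, with a and b inspectable by anyone.
incomes = [20, 40, 60, 80]          # invented, in thousands
credit_scores = [300, 500, 700, 900]

n = len(incomes)
mean_x = sum(incomes) / n
mean_y = sum(credit_scores) / n
a = (sum((x - mean_x) * (y - mean_y) for x, y in zip(incomes, credit_scores))
     / sum((x - mean_x) ** 2 for x in incomes))
b = mean_y - a * mean_x

# The model's "calculation" is right here, unlike a deep network's:
print(f"score = {a:.1f} * income + {b:.1f}")
```

A deep learning model may predict better, but it offers nothing like this one readable line of reasoning.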

Let’s take a medical example. In 2015, a deep learning algorithm was applied to a patient database of around 700,000 people to find patterns in it. The algorithm was then set loose on the data of current patients to see what it had learned. It turned out to be capable of, among other things, predicting very accurately when psychiatric patients would have a schizophrenic episode. That was a huge breakthrough for these patients: thanks to the prediction, medication can now be administered before the episode starts instead of after it is too late. Mission accomplished, you might say. But how the algorithm reaches its conclusion is still a mystery, as is the actual cause of the episodes. What we know about the disorder has thus remained the same, bringing us no closer to preventing it.


Making a 100% valid and reliable prediction is unfortunately a utopia. There is always a margin of error, and models are only as neutral as the data on which they are based. Meanwhile, we use increasingly advanced models to look for ever stronger links and connections. The downside is that we can no longer always follow the reasoning of these models, making their validity and reliability impossible to check. To get the most value out of AI, it is important that we acknowledge its limitations. So think about it, remain critical, and realize that sometimes a prediction should only be seen as a well-supported argument.



Three Tips to Boost the Success of Your Data Science Project

Data project failures at government agencies regularly hit the headlines, but the business world is facing these challenges as well. At the end of 2016, Gartner estimated that 60 percent of big data projects in 2017 wouldn’t make it past the experimental phase. Late last year, another analyst revised that estimate, tweeting that the failure rate was no less than 85 percent. For someone who works in the data sector, these figures are distressing. It’s something I often discuss with my peers: how is it possible that so many projects end in failure? The general consensus is that there’s huge room for improvement in the initial phase. These three tips will boost the success of your data science project, even before the project is up and running.

By Patrick Hennen, Managing Partner Data Science & Consulting at ORTEC

TIP 1: Don’t be dazzled by the hype

Applications that use artificial intelligence (AI) are currently hip, hot and happening. Google Trends shows that worldwide interest in AI has grown enormously over the past five years. Tech giants like Microsoft, Google, Facebook and IBM now present AI applications as the panacea for the business world. Want to gain better insights and earn more revenue faster? Simply download the algorithms and unleash them on your data. After all, anyone can use AI… can’t they? The message seems to be that the hype express has left the station, and if your company’s not on board, you’ve missed the digital transformation boat.

Don’t be dazzled by the hype, though. I can’t deny that AI is a powerful technology that opens up numerous new possibilities for companies. But I’m increasingly hearing organizations say that what they want is an AI solution – and that’s the wrong way to go about things. AI applications are a means for solving a problem, not an end in themselves. At the end of the day, artificial intelligence may well be the most powerful tool for improving efficiency or effectiveness, but other methods may actually be better suited to you. For instance, are you looking to automatically count the number of vehicles in a car park? You could train a machine learning algorithm in image recognition to define objects as vehicles and then run this algorithm on camera images taken at the entrance. But it would probably be easier to just install road sensors in the ground.

TIP 2: Test and evaluate expertise during the selection process

Implementing a data science application is specialist work, and every situation is different. The expertise you require will therefore be broad-ranging, covering both technical and business management aspects. The first stage of a successful data science project is hence to select the right expertise. That’s easier said than done, particularly if you’re hiring a third party to do the work: many vendors claim to have knowledge of data science but lack the depth of knowledge or experience required. Do you find yourself dealing with someone who doesn’t have a demonstrable quantitative background? In that case you should be hearing alarm bells. Has someone switched to data science at a later stage of his or her career? In that case, do some extra work to verify his or her analytical and statistical skills. Make absolutely sure that you’re not dealing with someone who’s looking to make easy money. It’s important to note here that a good provider will always be willing and able to explain what they do in simple, everyday language. So always keep asking questions until you’re sure that you understand. Is your prospective vendor telling you that it’s too complex for you to grasp? In that case they’ve either got a hidden agenda or no idea what they’re talking about. You can find some examples of good questions to ask potential vendors to test their expertise in this article. So take the selection process very seriously; that way, you’ll reap the benefits later.

TIP 3: Make sure IT isn’t running the show

Your selection of external expertise isn’t the only factor that determines the success of a project; it’s also crucial that you involve the right stakeholders. So even before the project begins, it’s advisable to think about the composition of your internal team. When you’re doing this, bear in mind that you’re aiming to resolve a business problem: data and your data science solution are the building materials and tools that will help you achieve this. It’s in nobody’s interest to end up building an ivory tower in which a few analysts and IT specialists are cloistered away, creating models without setting foot on the work floor. So you should put together a team in which internal representatives of the business take the lead and IT plays a supporting role. Internal or external data scientists can be deployed as a bridge between them, since they understand how both disciplines perceive things and can translate ideas between them. In larger projects it’s even advisable to make someone in the project team specifically responsible for liaising between business, data and IT: the ‘business translator’. Not only will this ensure that the process runs smoothly, but it will also help you to retain the data scientists’ expertise (see tip 2) in the long term.

In addition, it’s advisable to get executive buy-in at the earliest possible stage, long before the project begins. Ensuring support from the top means that you’ve got a sponsor and that effective action can be taken when required. For instance, you could organize an executive bootcamp during which new technologies and data science subjects are introduced, along with an explanation of how they will be used in the project.

A good beginning…

It may be a cliché, but it’s true nevertheless: a good beginning is half the battle. That’s certainly the case when it comes to data science projects. Are you capable of recognizing hypes and buzzwords for what they are? Are you well informed and supported by the right external experts? And have you put your multidisciplinary team together? If so, then I’m certain that your project will be a success, and that your data science business case will fulfill its promise of adding value.



Behind the Scenes of a ‘Self-Learning’ Algorithm: the Magic of AI Explained

Artificial Intelligence (AI) is often portrayed as a kind of magic technology that will take over humanity in a fully autonomous and self-learning manner. In reality, however, AI is mainly a combination of machine learning and smart programming, which actually requires a lot of human effort. In this article, I will provide a glimpse of what’s hidden behind the scenes of popular ‘self-learning’ applications.

By Ivo Fugers, data scientist at ORTEC

One of the most well-known fields of AI research is machine learning. Machine learning can perhaps best be explained as a statistical computer model that is able to recognize patterns in data. It allows AI to ‘learn’ from previous observations, so that AI can perform tasks without being explicitly programmed to perform them (e.g. machine learning can classify the risk of a new policyholder without knowing that person, assuming he or she behaves in a way it has observed in previous data). Eventually, that is. Because before it can do so, it needs to go through a detailed training program that requires considerable human input. A human has to accurately define the problem to be solved, outline correct and incorrect answers in advance, label the training data (although this can sometimes be automated), and evaluate correct and incorrect actions. In addition, a large part of the work involved in machine learning covers the proper configuration of an algorithm. Each case has its own optimal settings, which demands a lot of testing and research on the part of the data scientist. I’ve selected two real-world examples to illustrate this process.
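As a minimal, invented illustration of ‘learning from previous observations’, here is a one-nearest-neighbour classifier: it labels a new insurance applicant by finding the most similar historical case. The features and labels are assumptions made up for this sketch.

```python
# 1-nearest-neighbour: classify a new case by its closest past case.

def nearest_label(history, new_point):
    """history: list of (feature_vector, label); returns label of closest case."""
    def dist(a, b):
        # Squared Euclidean distance between two feature vectors
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    best_features, best_label = min(history, key=lambda h: dist(h[0], new_point))
    return best_label

# Invented past observations: (age, claims_per_year) -> risk label
history = [((25, 3), "high"), ((30, 2), "high"),
           ((55, 0), "low"),  ((60, 1), "low")]

print(nearest_label(history, (28, 2)))  # judged like similar past drivers
```

Note that every ‘learned’ label here was supplied by a human beforehand; the algorithm only generalizes from those labelled observations.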


Chatbots

Chatbots are automated conversation partners that speak or type. This form of AI is currently used frequently to relieve customer service departments, for example by answering frequently asked questions, or by triaging callers to ensure that they reach the right person directly.

Chatbots are programmed to recognize patterns in the input they receive. Based on those patterns, they then provide a pre-scripted answer. This already requires quite a bit of human labor, both manual and intellectual. When creating a new chatbot, you have to write out ‘conversation trees’. These include a wide range of input variations (e.g. ‘What is the weather forecast today?’, but also ‘Is it going to be hot today?’ or ‘Is it going to rain today?’), which should all lead to the desired output (in this case, the weather forecast). It is no longer necessary to manually enter every input variation, because a good annotation model can recognize the most important patterns from a number of examples. But the response scripts can quickly become very complex: the question ‘Why is Rob not here, is he under the weather?’ needs to activate a completely different script than the weather forecast. A chatbot therefore does not simply react to keywords, but must recognize the relationships between different keywords. That doesn’t mean the chatbot knows which relationships belong to which script: defining and labelling the keywords is pure human work (or ‘drudgery’, as the NRC (Dutch link) recently described it).
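A heavily simplified sketch of such a conversation tree follows, with invented intents and keyword patterns; a real chatbot would use a trained intent classifier rather than raw keyword overlap. Note how ‘under the weather’ must beat the weather-forecast intent.

```python
# Hand-written conversation tree: intents, keyword patterns, scripted replies.
INTENTS = {
    "weather": {
        "patterns": [{"weather", "forecast"}, {"rain", "today"}, {"hot", "today"}],
        "response": "Here is today's weather forecast.",
    },
    "absence": {
        "patterns": [{"rob", "here"}, {"under", "weather"}],
        "response": "Rob is out of the office today.",
    },
}

def reply(message: str) -> str:
    words = set(message.lower().replace("?", "").split())
    best_intent, best_overlap = None, 0
    for name, intent in INTENTS.items():
        for pattern in intent["patterns"]:
            overlap = len(words & pattern)
            if overlap > best_overlap:
                best_intent, best_overlap = name, overlap
    if best_intent is None:
        return "Sorry, I did not understand that."
    return INTENTS[best_intent]["response"]

print(reply("Is it going to rain today?"))
print(reply("Why is Rob not here, is he under the weather?"))
```

Every pattern and response above had to be written and labelled by a human, which is exactly the ‘drudgery’ the text describes.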

The next step is to make chatbots ‘smarter’ as they are used more often. We still need humans to define ‘good’ and ‘bad’ conversations, and to correct the algorithm on which response to give. That way, the chatbot can ‘learn’ not to make the same mistake again, and the patterns it recognizes in the input are refined. This may sound like self-learning, but humans are constantly providing the necessary feedback. In addition to the data scientist, end users are also frequently called on to provide that feedback. The Google Assistant is an excellent example: the chatbot recently began speaking Dutch, but it still isn’t very fluent. In order to improve its proficiency, it regularly asks its users whether it has done what they expected (for example via a thumbs up or thumbs down). The more people provide it with training data, the more accurately it can predict the desired answer to the next question.

Predictive maintenance

Predicting the moment at which different parts of a machine need maintenance or replacement can provide enormous improvements in efficiency. This application of machine learning, known as predictive maintenance, is extremely popular in the industrial sector. But before a machine becomes smart enough to tell its operators that it is time to inspect a pump or a bearing, many man-hours of work are required. That starts with collecting data about the variables that can affect the life cycle of machine parts. There are not only countless types of machines (turbines, pumps, centrifuges, coolers), but they all have different motors (gas, electric, diesel), drive shafts, sizes, ages, and materials. Moreover, there are also several different indicators of wear, varying from vibrations and temperature to rotation speed or pressure. So creating a solution that collects the right data for the algorithm to use requires a huge amount of human expertise. Collecting these data from different systems, cleaning them, and combining them is usually a very time-consuming process (around 80% of a data scientist’s time, according to Forbes). And at that point, no models have been created and no insights have been generated yet!

In order to ensure that the complex predictive maintenance algorithm produces the correct results, we need a different training method than the one used for the chatbot. The chatbot ‘learns’ what it needs to do based on complete examples elaborated by humans: when someone says ‘x’, they want to know ‘y’. In predictive maintenance, so many variables influence the need for maintenance that the algorithm often doesn’t even know what it should look for. It has to sort through the tangle of data to find the strongest indicators of a problem situation. In other words: we tell the algorithm that we want to know ‘y’, but we have little idea what ‘x’ is, except that it is hidden in the data. The algorithm eventually telling us what we want to know may seem ‘self-learning’, but before it can do so, humans again need to teach the algorithm which values belong to a machine that is operating ‘correctly’ or ‘incorrectly’. In other words: the algorithm can only start searching for the ‘x’ after a human tells it what the normal and problem situations are.
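The ‘teach the algorithm what normal and problem situations look like’ step can be sketched with invented sensor data: humans label healthy and failing runs, a trivial threshold is learned from those labels, and only then can new readings be flagged. Real systems learn far richer boundaries over many sensors, but the labelling dependency is the same.

```python
# Sketch: learn a decision boundary from human-labelled sensor readings.

def learn_threshold(healthy, failing):
    """Midpoint between the highest healthy and lowest failing reading."""
    return (max(healthy) + min(failing)) / 2

# Human-labelled vibration readings (mm/s, invented) for one bearing
healthy_runs = [1.2, 1.5, 1.1, 1.8]
failing_runs = [4.0, 4.6, 5.1]

threshold = learn_threshold(healthy_runs, failing_runs)

def needs_inspection(reading):
    return reading > threshold

print(threshold)               # learned boundary: 2.9
print(needs_inspection(3.4))   # True: schedule maintenance
print(needs_inspection(1.6))   # False: keep running
```

Without the labelled healthy and failing runs, the algorithm has no way to define ‘correct’ and ‘incorrect’ operation at all.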

Greater value from machine learning

AI applications such as chatbots and predictive maintenance have huge potential and can be a valuable tool for increasing efficiency. But before your company starts pursuing AI applications, it is important to understand that they involve more than just pressing a magic button. The process of producing a prediction from data has to be arranged extremely precisely. That requires expanding your knowledge, perhaps even changing your operations, and it will certainly involve a lot of human effort. Once you understand that, you will start the trajectory with the right expectations, leading to a greater chance of success at creating a machine learning application that is fully attuned to your operations and that delivers the promised added value.