Alex Kirsch
Independent Scientist

The Two Worlds of Machine Learning

14.05.2021
The machine learning hype has been raging for a decade or so, but the broad application of data science is not happening. A few observations and thoughts on the reasons for the discrepancy between the public presentation and the real application of machine learning.

Machine learning is talked about a lot. Every day we can read about data science projects that are about to transform the future, the latest tools to make machine learning even better, and political provisions to educate backward German SMEs that simply seem not to get its incredible value.

There seem to be two completely different universes (or views on just one universe): one where data is the new gold and anybody who is not using it will perish mercilessly, the other one that lives on happily without TensorFlow. What is happening?

The Public Version

One of the two worlds is on public display: huge companies like Google or Facebook announce incredible successes with machine learning, particularly deep learning. They generously share their tools, such as TensorFlow, with the rest of the world.

Start-up companies offering data science services have been springing up like mushrooms. They, as well as scientific institutions, are supported by government grants to make sure we do not fall completely behind "IT superpowers" such as the US or China.

Along the same lines, the top management of large corporations announces R&D spending on machine learning and the development of data strategies, reassuring shareholders that they are prepared for the upcoming data age.

Reading and hearing such news (for about a decade now), one would expect data analytics, especially machine learning, to have become an everyday technology, being used all around us.

The Private Version

Online workshops on applications of machine learning are constantly promoted with the promise of real-world examples. But in the workshop you are left with ideas and hypothetical use cases, not with running systems. An all-time favourite is the predictive maintenance idea of knowing when an industrial machine will break down. All you are ever told in workshops or newspaper articles is that some innovative company is starting a project. After more than ten years of the machine learning hype, one would expect to get results rather than announcements.

Lately I have been talking to decision-makers and practitioners in several German SMEs, particularly in industrial machinery. All of them agreed that data availability and usage have been growing and will continue to grow. But they also agreed that machine learning plays only a small part, if any at all. When data is used, the techniques are much more down to earth: plotting numbers, calculating running averages, checking threshold values. All the companies I spoke to have seriously tried to find some use for machine learning; all of them were disappointed.
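To make concrete how modest these "down to earth" techniques are, here is a minimal sketch of a running average with a threshold check — purely illustrative; the function names, sensor values, and threshold are my own assumptions, not taken from any of the companies mentioned:

```python
# Illustrative sketch: a moving average over recent sensor readings
# plus a simple threshold check. All names and values are made up.

def moving_average(values, window):
    """Average of the last `window` readings (or all, if fewer)."""
    if len(values) < window:
        return sum(values) / len(values)
    return sum(values[-window:]) / window

# Hypothetical temperature readings from an industrial machine
readings = [20.1, 20.3, 20.2, 24.8, 25.1, 25.4]
THRESHOLD = 24.0  # alert limit, chosen for illustration

avg = moving_average(readings, window=3)
if avg > THRESHOLD:
    print(f"Alert: moving average {avg:.2f} exceeds threshold {THRESHOLD}")
```

A few lines like these, wired to a plot and a notification, cover much of what the practitioners described — no model training involved.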

Who is lying?

I think the contradiction between machine learning's public presentation and its real use is due to two simple phenomena:

  1. all parties are simply following their specific interests, and
  2. most companies in the world have different challenges to master than the tech giants.

It is part of the business of tech giants to regularly prove their technological superiority. This can add to their brand reputation or support real business by offering cloud services or tools for machine learning and data analytics.

Their press releases are eagerly picked up by journalists who love to announce technological progress. Politicians consume the stories and turn them into quick answers for complex societal challenges.

For scientists such a situation is perfect for acquiring grants. Those who have been working on machine learning take their chance to apply for funding or snatch a high-paid job in industry, while others flock to the machine learning communities to get their piece of the cake.

In addition, scientists mostly do not care (or know?) about life beyond their grant and publication list. All the SMEs I talked to had some negative experience when collaborating with scientists (usually the setup was students doing their theses at the company with an advisor at a university). The scientists were genuinely astonished that companies would bother with real data when the error rates are so much lower on synthetic data sets (I did not make this up, I was told this pattern more than once!).

So keeping the public version of the story alive is an everyday mixture of self-interest and ignorance. Nobody is actually lying.

But still the story is wrong. Machine learning, especially deep learning, is a last resort for tasks that cannot be solved well with other techniques. Take for example image classification: before the era of deep learning, for a computer to know what an image showed without access to hand-crafted meta-information was more or less a matter of guessing. Deep learning has elevated the classification quality to something like educated guessing. Machines are still far from recognizing situations in the world as humans would (the whole task of image classification is a very boiled-down artificial task compared to how humans understand images), but for an image search on Google it can make a noticeable improvement.

In contrast, industrial processes are already highly optimized and reliable. In such a context you are not looking for a fallback solution, but for the last bits of reliable performance. Machine learning is more laborious and less robust than the available methods, making it a poor substitute for hand-coded solutions.

The bottom line is that solutions developed at Google, Amazon or Facebook are not necessarily useful for others. Average SMEs or even industrial corporations simply have different tasks to solve.

The Consequences

There seems to be no way to reconcile the two worlds, one propagating a shining future, the other doing business as usual. In my perception the public version is losing its impetus, and I hope it will disappear over time. But this is a long and slow process.

Just as silently as the press releases appeared, the private and public investments in data strategies and education programs will be written off without anybody noticing.

The unfortunate victims are students who are currently taught in machine learning programs how to implement the "magic algorithms". Even if machine learning were of much use in industry, the algorithms are the least important part, and students are not trained on the real issues of applying machine learning (which computer science professors hardly know themselves). We are now seeing a little army of data scientists emerging from universities who will have a hard time finding their place on the job market, while many other IT jobs are hard to fill.
