AI – Where We Are in the Hype Cycle and How It Continues

While the AI Index shows that the number of research articles and conferences in the field of AI continues to grow, the media are slowly showing fatigue in the face of the hype. So it’s time to take stock: What has been achieved? What is practically possible? And what is the way forward?

What has been achieved?

In 2018 and 2019, the previously developed methods for applying neural networks (which is how I define AI here) were further refined and perfected. Whereas the focus was initially on methods for image classification and processing (2012–2016, the ImageNet competition) and then on audio (2015–2017, the launch of Alexa and other voice assistants), 2019 brought major advances in text processing and generation (NLP, natural language processing). Overall, the available technologies have been further improved and combined with great effort, especially by the major players (Google, Facebook, OpenAI, Microsoft).

What is practically possible?

The use of AI is still essentially limited to four areas of application:

  • Images: image recognition and segmentation
  • Audio: conversion from speech to text and vice versa
  • NLP: text processing and generation
  • Labeled data: prediction of a label (e.g. a price) from a set of features
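As a minimal illustration of the last point, here is a sketch of predicting a label from features with ordinary least squares. The data (apartment size, number of rooms, price) and all numbers are invented for the example:

```python
import numpy as np

# Hypothetical labeled data: features (size in m², rooms) and label (price in EUR).
X = np.array([[50.0, 2], [80.0, 3], [120.0, 4], [65.0, 2]])
y = np.array([150_000.0, 240_000.0, 360_000.0, 195_000.0])

# Fit a linear model y ≈ X @ w via least squares, with an intercept column added.
A = np.hstack([X, np.ones((X.shape[0], 1))])
w, *_ = np.linalg.lstsq(A, y, rcond=None)

def predict(features):
    """Predict the label for a new feature vector."""
    return np.dot(np.append(features, 1.0), w)

print(round(predict([100.0, 3])))  # → 300000 for this toy data
```

In practice, of course, the model would be a gradient-boosted tree or a neural network rather than a straight line, but the pattern – features in, label out – is the same.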

This list is surprisingly short, measured against the attention AI receives in the media. The most impressive successes of AI, however, result from combining these techniques. Speech assistants, for example, use audio, NLP and labeled data together: the audio input is converted to text, the intention of the text is recognized with NLP, and the speaker’s wish is predicted using huge amounts of labeled data, i.e. previous evaluations of similar utterances.

Three factors were decisive for the development of precisely these AI application fields:

  1. the existence of large quantities of freely available benchmark data sets on which algorithms could be developed and compared
  2. a large community of researchers who jointly agreed on these benchmark data sets and compared their algorithms in public competitions (GLUE, Benchmarks AI, machine translation, etc.)
  3. the free availability of the developed models, which serve as a starting point for practical applications (TensorFlow Hub is a prime example)

Against these prerequisites, one can quickly assess how realistic some marketing fantasies are. For the frequently and strikingly promoted application field of predictive maintenance, for example, there are neither benchmark data sets nor a research community, and accordingly there are no pretrained models.

What’s next?

On the one hand, it is foreseeable that development in AI will initially continue in the application fields mentioned above and expand into adjacent areas. On the other hand, new areas are emerging which, like those fields, are being driven forward with large public and private funds (OpenAI and DeepMind, for example, are backed by Elon Musk and Google respectively with billions). Autonomous driving is certainly an example of large investments here, but so is IoT. In total, I see the following areas developing strongly in 2020–2022:

  • The combination of reinforcement learning with the application areas above for faster learning of models
  • Further progress in autonomous driving, resulting from the application and combination of AI and reinforcement learning
  • Breakthroughs in generalizing the knowledge gained from image processing to 3D (geometric deep learning and graph networks)
  • A fusion of traditional statistical methods with neural networks
  • IoT time series (see below)

I see a big change coming with the rise of IoT and the associated sensor technology and data. By their very nature, IoT data are time series that must be filtered, combined, smoothed and enriched before they can be evaluated. Relatively little specialized work has been done for this purpose to date, so 2020–2022 could hold some surprising twists and breakthroughs in this topic. German industry in particular, which has benefited rather little from the initial developments in AI, should find a promising field of application here.
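To make the filter/smooth/combine steps concrete, here is a small sketch of typical IoT time-series preprocessing with pandas. The sensor, its sampling rate and the plausibility range are all invented for the example:

```python
import numpy as np
import pandas as pd

# Hypothetical sensor stream: one temperature reading every 10 seconds for an hour.
idx = pd.date_range("2020-01-01", periods=360, freq="10s")
raw = pd.Series(20 + np.sin(np.arange(360) / 20), index=idx)
raw.iloc[100] = 999.0  # a typical outlier from a flaky sensor

# Filter: mask physically implausible values (assumed valid range -40..85 °C).
clean = raw.where(raw.between(-40, 85))

# Smooth: fill the gap by interpolation, then apply a one-minute rolling mean.
smooth = clean.interpolate().rolling(window=6, min_periods=1).mean()

# Enrich/combine: downsample to one-minute averages for further evaluation.
per_minute = smooth.resample("1min").mean()

print(len(per_minute))  # 360 samples at 10 s → 60 one-minute buckets
```

Each of these steps is one line here, but choosing the right filters, windows and aggregation levels for real sensors is exactly the domain-specific work the paragraph above refers to.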

Are data science platforms a good idea?

To paraphrase Karl Valentin: platforms are beautiful, and they take a lot of work off your hands. The idea of platforms for automatic data analysis thus comes at just the right time. Fittingly, Gartner has now published a “Magic Quadrant for Data Science and Machine Learning Platforms”. The document itself sits behind a paywall, but some of the companies mentioned in the report offer access to it on their websites in exchange for your contact details.

Gartner particularly emphasizes that such a platform should provide everything you need from a single source, unlike various individual components that are not directly coordinated with each other.

Sounds good to me! However, data science is not an area where a tool, or even a platform, magically gets you ahead. The development of solutions – for example, for predictive maintenance of the machines a company offers – goes through various phases, with cleaning/wrangling and preprocessing accounting for most of the work. This is where ETL (extract, transform, load) and visualization tools such as Tableau fit in. And beyond the comfort zone that managers imagine platforms provide, database queries and scripts for transformation and aggregation in Python or R are simply the means of choice. A look at data science online tutorials from top providers such as Coursera and Udemy underlines the importance of these – well – down-to-earth tools. “Statistical analysis, Python programming with NumPy, pandas, matplotlib, and Seaborn, advanced statistical analysis, Tableau, machine learning with statsmodels and scikit-learn, deep learning with TensorFlow” is how one of Udemy’s course programs reads.
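A typical transformation-and-aggregation script of the kind mentioned above might look like this; the machine log, its columns and the cleaning rule are hypothetical:

```python
import pandas as pd

# Hypothetical machine log: raw events with a missing sensor reading.
events = pd.DataFrame({
    "machine": ["A", "A", "B", "B", "B"],
    "temp": [71.2, None, 68.5, 69.1, 70.3],
    "status": ["ok", "ok", "warn", "ok", "ok"],
})

# Transform: fill gaps with each machine's own mean temperature.
events["temp"] = events.groupby("machine")["temp"].transform(
    lambda s: s.fillna(s.mean())
)

# Aggregate: mean temperature and warning count per machine.
summary = events.groupby("machine").agg(
    mean_temp=("temp", "mean"),
    warnings=("status", lambda s: (s == "warn").sum()),
)
print(summary)
```

Nothing here needs a platform – a plain script, versioned alongside the rest of the code, does the job and stays debuggable.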

In addition, the projects often get stuck in this preliminary stage or are cancelled. There are many reasons for this:

  • no analytical/statistical approach can be found
  • the original idea proves to be unfeasible
  • the data is not available in the quantity or quality you need
  • simple analyses and visualizations are enough and everything else would be “oversized”.

This is no big deal; it only means that the automated use of machine learning and AI does not turn every data set into a data treasure. If, however, a productive benefit does become apparent, it is necessary to prepare for the production pipeline and its time and resource constraints. Usually you then start from scratch and rebuild everything, e.g. in TensorFlow for neural networks or in custom libraries.

The misunderstanding is that a) data science can be carried seamlessly through to productive use, and b) a one-stop shop for data science (here: a “platform”) is needed that does everything in one go. That will never happen.

This is really good news, because it means that organizations can achieve their first goals without having to resort to large platforms. The reasonably careful selection of suitable tools (many of them open source) helps to achieve this.