Developer Experience – from intuitive to complex

It sounds like an exciting vision of the future: users from every discipline use ready-made program modules to quickly and easily create simulations, optimization tasks or analyses using artificial intelligence (AI). Even departments whose employees have no knowledge of a high-level programming language could implement such solutions. That’s the idea. Of course, developers must first create these program modules before business users can assemble a solution that meets their requirements.

AI-powered analytics for the business department

Together with our partners, we are working in the AI Marketplace research project to bring this vision closer. The project’s eponymous goal is to develop AI applications for the product development process and offer them on a central trading platform. The range will include ready-made AI-supported apps and program blocks for very specific tasks, as well as services such as seminars on selected AI topics and contract development. The development and reuse of the apps are currently being tested, while the project team evaluates the benefits and the quality of the results.

Different programming levels for extended use

So much for the state of research – but how exactly do we at CONTACT support the development of reusable program modules, the integration of simulation models, or AI-supported analysis methods? One practical example can be found in predictive maintenance: maintenance no longer takes place at fixed intervals, but is scheduled depending on operating data and events at the machine or plant. For such use cases, our Elements for IoT platform provides a way to analyze operating data directly. The digital twin stores the data of the machine or plant in a unique context, where it can be retrieved and analyzed using block-based programming. With the no-code functionality of the IoT platform, departments can intuitively create digital twins, define automatic rules, monitor events, and build diagrams and dashboards – without writing a line of code.
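What such an automatic rule does can be illustrated with a minimal sketch in plain Python. All names, fields and thresholds below are invented for illustration – this is not the actual Elements for IoT API:

```python
# Minimal sketch of an automatic rule on operating data.
# Function and field names are illustrative, not the Elements for IoT API.

def overtemperature_rule(reading, limit_c=80.0):
    """Return an alert event if a temperature reading exceeds the limit."""
    if reading["temperature_c"] > limit_c:
        return {"event": "overtemperature", "value": reading["temperature_c"]}
    return None

readings = [
    {"timestamp": "2020-01-01T10:00", "temperature_c": 72.5},
    {"timestamp": "2020-01-01T10:05", "temperature_c": 84.1},
]

events = []
for r in readings:
    e = overtemperature_rule(r)
    if e is not None:
        events.append(e)
```

In the platform itself, such a rule would be assembled from predefined blocks rather than typed in by hand; the sketch only shows the logic behind it.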

In addition, there are applications around the digital twin that require more programming expertise. For these, the platform offers analysts the possibility to develop their models themselves in a high-level programming language, using a Jupyter Notebook or other analysis tools. Especially for prototyping, Python is the language of choice, but it is also possible to work with a compiled language such as C++. The models are then automated and deployed to a runtime environment that calculates the predictions continuously. The code is executed either in the company’s own IT infrastructure or directly at the plant or machine in the field (edge).
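A model prototype of the kind an analyst might start with in a Jupyter Notebook can be only a few lines of Python – for example, fitting a linear wear trend to operating data and extrapolating when a service threshold will be reached. The data and threshold below are invented for illustration:

```python
# Prototype sketch: estimate when wear will cross a service threshold
# by fitting a linear trend to operating data (data values are invented).

def fit_line(xs, ys):
    """Ordinary least squares fit for y = a*x + b."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    b = my - a * mx
    return a, b

hours = [0, 100, 200, 300, 400]        # operating hours
wear = [0.0, 0.11, 0.19, 0.32, 0.40]   # measured wear in mm

a, b = fit_line(hours, wear)
threshold = 0.8                         # service limit in mm
predicted_hours = (threshold - b) / a   # operating hours until maintenance
```

Automating such a model then means packaging exactly this calculation so that the runtime environment can re-run it whenever new operating data arrives.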

We call this approach low-code development, because code is written only for developing the models. The data connection runs via the digital twin and is set up purely through configuration. The resulting piece of code can then be reused as a program block in various applications, for example across the digital twins of a fleet.

CONTACT Elements for IoT is thus open to interaction at different levels: from the use of predefined building blocks (no-code), to working with self-written program code (low-code), to defining your own business objects and extending the platform itself in Python.

AI – Where we are in the Hype Cycle and how it continues

While the AI Index shows that the number of research articles and conferences in the field of AI continues to grow, the media are slowly showing signs of fatigue in the face of the hype. So it’s time to take stock: What has been achieved? What is practically possible? And what is the way forward?

What has been achieved?

In 2018 and 2019, the previously developed methods for applying neural networks (this is how I define AI here) were further refined and perfected. Whereas the focus was initially on image classification and processing (2012–2016, the ImageNet competition) and then on audio (2015–2017, the launch of Alexa and other voice assistants), 2019 brought major advances in natural language processing (NLP), i.e. text processing and generation. Overall, the available technologies have been further improved and combined with great effort, especially by the major players (Google, Facebook, OpenAI, Microsoft).

What is practically possible?

The use of AI is still essentially limited to four areas of application:

  • Images: image recognition and segmentation
  • Audio: Conversion from speech to text and vice versa
  • NLP: text processing and generation
  • Labeled Data: Prediction of the label (e.g. price) from a set of features
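The fourth area – predicting a label from a set of features – can be illustrated with a minimal nearest-neighbour classifier. The toy data and category names below are invented:

```python
# Minimal illustration of "predicting a label from features":
# a one-nearest-neighbour classifier on a toy price dataset (invented data).

def predict(features, train):
    """Return the label of the closest training example (squared distance)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(train, key=lambda example: dist(example[0], features))[1]

# (area_m2, rooms) -> price category
train = [
    ((30, 1), "low"),
    ((70, 3), "mid"),
    ((120, 5), "high"),
]

label = predict((65, 3), train)  # closest training example is (70, 3)
```

Real applications replace the handful of examples with huge labeled data sets and the nearest-neighbour rule with a trained model, but the task – features in, label out – stays the same.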

This list is surprisingly short, measured against the attention AI receives in the media. The most impressive successes, however, result from combining these techniques. Speech assistants, for example, combine audio, NLP and labeled data: the spoken input is converted into text, the intention of the text is recognized with NLP, and the speaker’s wish is predicted using huge amounts of labeled data, i.e. previous evaluations of similar utterances.
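That combination can be sketched as a pipeline of three stages. Every stage is stubbed out here with fixed return values; in a real assistant each one is a large trained model:

```python
# Sketch of a speech-assistant pipeline: audio -> text -> intent -> action.
# All three stages are stubs; real systems use trained models for each step.

def speech_to_text(audio):
    """Audio stage stub: a real system runs a speech-recognition model."""
    return "turn on the light"

def extract_intent(text):
    """NLP stage stub: a real system classifies the utterance's intention."""
    return {"intent": "device_on", "device": "light"}

def predict_action(intent):
    """Labeled-data stage stub: in practice, a model trained on huge
    amounts of previously evaluated utterances."""
    actions = {"device_on": "switch_on"}
    return actions[intent["intent"]]

action = predict_action(extract_intent(speech_to_text(b"...")))
```

The point of the sketch is the structure: each stage belongs to one of the four application areas listed above, and the impressive result comes from chaining them.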

Three factors were decisive for the development of precisely these AI application fields:

  1. the existence of large quantities of freely available benchmark data sets (data sets for machine learning) on which algorithms have been developed and compared
  2. a large community of researchers who have jointly agreed on the benchmark data sets and compared their algorithms in public competitions (GLUE, Benchmarks AI, Machine Translation, etc.)
  3. the free availability of the developed models, which serve as a starting point for practical application (TensorFlow Hub, for example)

Based on these prerequisites, one can quickly assess how realistic some marketing fantasies are. For the frequently and boldly advertised application field of predictive maintenance, for example, there are neither benchmark data sets nor a community of researchers – and accordingly there are no models.

What’s next?

On the one hand, it is foreseeable that development in AI will initially continue in the fields of application mentioned above and spread into neighbouring areas. On the other hand, areas are emerging which, similar to those fields, will be driven forward with large amounts of public and private funding (OpenAI and DeepMind, for example, are backed by Elon Musk and Google respectively with billions). Autonomous driving is certainly one example of large investments in this area, as is the IoT. In total, I see the following areas developing strongly in 2020–2022:

  • The combination of reinforcement learning with AI areas for faster learning of models
  • A further strengthening in the area of autonomous driving resulting from the application and combination of AI and reinforcement learning
  • Breakthroughs in the generalization of the knowledge gained from image processing to 3D (Geometric Deep Learning and Graph Networks)
  • A fusion of traditional methods from statistics with neural networks
  • IoT time series (see below)

I see a big change coming with the rise of the IoT and the associated sensor technology and data. By their very nature, IoT data are time series that must be filtered, combined, smoothed and enriched before they can be evaluated. Relatively little specific work has been done here to date. Between 2020 and 2022, this topic could hold some surprising twists and breakthroughs for us. German industry in particular, which has benefited rather little from the initial developments in AI, should find a promising field of application here.
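A trailing moving average is the simplest example of the kind of smoothing such IoT time series routinely need. Window size and sensor values below are illustrative:

```python
# Sketch of typical IoT time-series preprocessing: a trailing moving
# average that damps a noisy sensor spike (data and window are invented).

def moving_average(values, window=3):
    """Smooth a series: each point becomes the mean of the last `window`
    values (fewer at the start of the series)."""
    out = []
    for i in range(len(values)):
        lo = max(0, i - window + 1)
        chunk = values[lo:i + 1]
        out.append(sum(chunk) / len(chunk))
    return out

raw = [10.0, 10.2, 15.0, 10.1, 9.9, 10.3]  # one noisy spike at 15.0
smooth = moving_average(raw)               # spike damped to about 11.8
```

Filtering, combining and enriching follow the same pattern: unspectacular transformations applied point by point along the time axis, which is exactly why this area still leaves room for more specific tooling.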

Are data science platforms a good idea?

To paraphrase Karl Valentin: platforms are beautiful and take a lot of work off your hands. The idea of platforms for automated data analysis therefore comes at just the right time, and fittingly Gartner has published a “Magic Quadrant for Data Science and Machine Learning Platforms”. The document itself sits behind a paywall, but some of the companies mentioned in the report offer access to it in exchange for your contact details.

Gartner particularly emphasizes that such a platform should provide everything you need from a single source, unlike various individual components that are not directly coordinated with each other.

Sounds good to me! However, data science is not an area where a tool – or even a platform – magically gets you ahead. The development of solutions, for example for the predictive maintenance of the machines a company sells, goes through various phases, with cleaning/wrangling and preprocessing accounting for most of the work. ETL (Extract, Transform, Load) and visualization tools such as Tableau belong in this area. And beyond the comfort zone that managers imagine platforms to provide, database queries and scripts for transformation and aggregation in Python or R are simply the means of choice. A look at data science online tutorials from top providers such as Coursera and Udemy underlines the importance of these – well – down-to-earth tools. “Statistical analysis, Python programming with NumPy, pandas, matplotlib, and Seaborn, advanced statistical analysis, Tableau, machine learning with statsmodels and scikit-learn, deep learning with TensorFlow” reads one Udemy course program.
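The cleaning and aggregation step that dominates this work often looks as unspectacular as the following sketch. Field names and values are invented:

```python
# Sketch of the unglamorous wrangling phase: discard incomplete records,
# then aggregate per machine (field names and data are invented).

from collections import defaultdict
from statistics import mean

records = [
    {"machine": "A", "runtime_h": 5.0},
    {"machine": "A", "runtime_h": None},  # sensor dropout -> discard
    {"machine": "B", "runtime_h": 3.5},
    {"machine": "A", "runtime_h": 7.0},
]

# 1. Cleaning: keep only complete records.
clean = [r for r in records if r["runtime_h"] is not None]

# 2. Aggregation: mean runtime per machine.
by_machine = defaultdict(list)
for r in clean:
    by_machine[r["machine"]].append(r["runtime_h"])
summary = {machine: mean(v) for machine, v in by_machine.items()}
```

In practice the same steps run against a database or a pandas DataFrame instead of a list of dictionaries, but no platform removes the need to decide, record by record, what counts as clean.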

In addition, projects often get stuck in this preliminary stage or are cancelled altogether. There are many reasons for this:

  • no analytical/statistical approach can be found
  • the original idea proves to be unfeasible
  • the data is not available in the quantity or quality you need
  • simple analyses and visualizations are enough and everything else would be “oversized”.

This is no big deal; it only means that the automated use of machine learning and AI does not turn every data set into a data treasure. If, however, a productive benefit does become apparent, the production pipeline and its time and resource constraints have to be addressed. Usually you then start from scratch and rebuild everything, e.g. in TensorFlow for neural networks or in custom libraries.

The misunderstanding is that a) data science can be carried seamlessly into productive use and b) a one-stop shop for data science (here: a “platform”) is needed that does everything in one go. Neither will ever happen.

This is really good news, because it means that organizations can achieve their first goals without resorting to large platforms. A reasonably careful selection of suitable tools (many of them open source) is what gets them there.

Also interesting:
In my video “AI Needs Strategy” I explain which steps companies can take to use AI technology successfully.