With a digitization roadmap to a truly digital company

The digitization of business processes has received remarkable attention in recent years. On the one hand, the Corona pandemic ruthlessly exposed digital gaps; on the other hand, in view of political, social and ecological changes, companies are called upon more than ever to act in a more agile and sustainable way. Motivation is high and progress in digitization is becoming more and more visible. However, implementation usually follows a salami tactic rather than a digitization roadmap that shows the milestones and waypoints on the way to the goal.

Digitization in small bites poses risks

When I talk to representatives of medium-sized companies about digitization, the answer is often: Yes, we do it all the time! Examples include actions such as the creation of policies to increase the use of Office software features throughout the company, the introduction of a ticket system, or the use of a requirements management tool in product development.

This reflects a common practice of carrying out digitization projects on a divisional or departmental basis, in relation to individual tasks or sub-processes. At first glance, it often seems attractive to plan and implement projects from a departmental or site perspective, because the coordination effort is lower and department-specific solutions can supposedly be implemented quickly.

In principle, implementing demanding projects in manageable steps is a sensible approach, as is generating benefits quickly and making digitization progress continuously visible. However, the fragmented approach also carries risks, namely when the target image of digitization is unclear and the path to achieving it is not adequately described. In that case, there is a realistic risk of missing essential goals of digitization projects: for example, failing to exploit the potential of new digital business models and thus not driving forward the digital transformation of the company, or leaving company-wide and cross-company data treasures unused because the focus is only on local optimization.

The benefits of a digitization roadmap

To put it up front: With a digitization roadmap, companies can minimize the above-mentioned risks with little effort. It provides a reliable, medium-term guideline for all digitization activities in the company, aligned with a clear target image. With its different perspectives on the topic of digitization, it addresses the specialist departments, IT and management. The digitization roadmap should contain some essential information:

  • What is the company’s level of digitization?
    The basis of the digitization roadmap is an inventory of the current level of digitization in the company. For this purpose, the existing target images, requirements, and activities in the various corporate divisions and hierarchies are reviewed. Common maturity models help to assess the company’s level of digitization.
  • What is the target scenario?
    Once the status quo has been established, a clear, coordinated target scenario for digitization can be drawn up. The target scenario contains an overview of the future end-to-end digital business processes as well as the future application architecture and the necessary information services.
  • Which sub-steps are necessary?
    Once the goal is clear, the next step is to define and describe the necessary subprojects. In order to prioritize the subprojects in a meaningful way, the required internal and external resources and the possible project risks are estimated. The information previously obtained from the inventory is also used to extrapolate the benefit and business potential of the individual digitization subprojects. This makes it possible to calculate business cases for the planned projects.
    The project team and management are thus able to decide on the subprojects and their prioritization according to objective cost/benefit criteria, resource availability and other company-specific parameters (a simple prioritization sketch follows after this list). In this way, today’s digitization bites become defined, evaluated subprojects within an overarching context.
  • What is the business case?
    The high degree of concretization of the digitization activities, and especially of the relevant business case, is an essential basis for reliable financing of digitization projects. For example, specialized IT project financiers offer flexible top-up leasing that adjusts the leasing rates to the expected increase in benefits, or even financing of internal personnel resources. With such financing models, digitization can succeed without straining liquidity.
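
To make the cost/benefit prioritization tangible, here is a minimal sketch in Python. All names, figures and the scoring rule (benefit/cost ratio discounted by an estimated risk factor) are hypothetical illustrations, not part of a specific roadmap methodology.

```python
from dataclasses import dataclass

@dataclass
class Subproject:
    """One digitization subproject from the roadmap (illustrative fields only)."""
    name: str
    cost: float            # estimated internal + external cost in EUR
    annual_benefit: float  # extrapolated annual benefit in EUR
    risk: float            # rough risk factor between 0 (low) and 1 (high)

    def payback_years(self) -> float:
        """Simple payback period of the business case."""
        return self.cost / self.annual_benefit

    def score(self) -> float:
        """Benefit/cost ratio, discounted by the estimated risk."""
        return (self.annual_benefit / self.cost) * (1.0 - self.risk)

# Hypothetical subprojects taken from the inventory of digitization activities
candidates = [
    Subproject("Ticket system rollout", cost=80_000, annual_benefit=60_000, risk=0.2),
    Subproject("Requirements management tool", cost=150_000, annual_benefit=90_000, risk=0.4),
    Subproject("End-to-end order process", cost=400_000, annual_benefit=350_000, risk=0.5),
]

# Rank subprojects by their risk-adjusted benefit/cost score
for project in sorted(candidates, key=Subproject.score, reverse=True):
    print(f"{project.name}: score={project.score():.2f}, "
          f"payback={project.payback_years():.1f} years")
```

Such a score is of course only one input; resource availability and other company-specific parameters still have to be weighed by the project team and management.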

Conclusion

In the past, often only individual projects were launched. Currently, however, more and more of our customers are taking advantage of strategic planning with digitization roadmaps. With little effort, these offer reliable orientation for the digital transformation, with a clear target image, concrete business cases and alternative financing options.

What is Material Data Management?

When someone asks me something about Material Data Management, I always counter by asking what exactly is meant by “material”. This may not be the answer the other person expects at that moment, but it saves us both long minutes of confusion and talking past each other. The reason: not all materials are the same.

About the ambiguity of language

As a Frenchman in Germany, I am used to ambiguity leading to misunderstandings. Some expressions cannot be translated one-to-one from one language to another – at least not in such a way that it is immediately clear to everyone what is meant. A well-known example is the word “Gemütlichkeit”, which only exists in German. More insidious, however, are the so-called false friends: word pairs such as “gift” in English and “Gift” in German. They look the same, but their meanings are fundamentally different. Even as an experienced polyglot, one is not protected from this. For example, my French interlocutors may be puzzled when I say that something has “irrité” me, meaning that it surprised me – they understand it to mean that I have developed some kind of skin rash out of sheer annoyance.

What can lead to funny and sometimes slightly embarrassing situations in everyday life often causes inefficiency in the working world. To find examples, we don’t even have to look at an international context: even within a German-speaking organization, not everyone necessarily speaks the same language. This is not due to the strong dialects found in many places, but to the discipline-specific nature of language: people with different qualifications or expertise can understand different things by the same word.

And that brings me to the topic of this article – more precisely, to the multilingual mess and the interdisciplinary ambiguity of the word “material”, a source of almost galactic confusion that I would like to untangle.

Material is not equal to material

A large part of enterprise software is about managing materials and their data. There are excellent solutions for this, called Materials Management, Materials Data Management or even Master Material Data Management. The names sound very similar and are often used synonymously in practice, yet they refer to completely different things. Following the motto “material equals material”, it is overlooked that the word can have a different meaning for different disciplines, and things are lumped together that have little to do with each other. Confusion and misunderstanding are guaranteed.

Differences within the disciplines

In production logistics or material requirements planning, a material is a logistical unit, a resource that is needed for some value-adding process: goods that can be purchased, such as a screw, a flange, a spindle or a tire. The art of sensibly procuring, moving and storing materials is called “Materialwirtschaft” in German and Materials Management in English.

In the context of product development, materials in this sense do not play a role. Development is not interested in the physical hood and where it is stored, but only in its description. To put it in the language of information technology: development defines classes, production logistics manages instances of these classes. However, the concept of material reappears here as well, because in everyday usage items, parts and assemblies are readily called materials. The reason is that they become materials in the sense of production logistics at the interface between PLM and ERP. This gives rise to misleading terms such as Material Management or Material Data Management. It would be more correct to speak of Master Data Management in the sense of parts master management.
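
The class/instance analogy can be made concrete with a small sketch. The following Python fragment is purely illustrative; the field names and the split between a PLM-side part master and ERP-side material instances are simplifying assumptions, not the data model of any particular system.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class PartMaster:
    """PLM view: the *description* of a part (a 'class' in the IT analogy)."""
    part_number: str
    name: str
    revision: str

@dataclass
class MaterialInstance:
    """ERP / production-logistics view: a stocked, plannable resource
    (an 'instance' of the part description)."""
    master: PartMaster
    plant: str
    storage_location: str
    quantity_on_hand: int

# Development defines the class ...
hood = PartMaster(part_number="4711-001", name="Engine hood", revision="B")

# ... production logistics manages instances of it at the PLM/ERP interface
stock = [
    MaterialInstance(hood, plant="Plant 1", storage_location="A-03-17", quantity_on_hand=42),
    MaterialInstance(hood, plant="Plant 2", storage_location="C-11-02", quantity_on_hand=7),
]

total = sum(item.quantity_on_hand for item in stock)
print(f"{hood.name} ({hood.part_number}/{hood.revision}): {total} units in stock")
```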

In engineering (including design and simulation), the word material describes the physical composition of an object in the sense of materials science or materials technology: whether an object is made of wood, PA66, Inconel or GFRP, for example. This is obvious. The management of all information about materials and their properties is called Material Data Management. Confusingly, the acronym MDM also stands for Master Data Management, which is not particularly conducive to sharpening the terms.
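
By way of contrast with the part master above, here is an equally hypothetical sketch of a material record in the materials-science sense; the property names and values are rough illustrative figures, not entries from a real material database.

```python
from dataclasses import dataclass, field

@dataclass
class Material:
    """Material Data Management view: a substance and its engineering properties."""
    name: str                      # e.g. "PA66-GF30"
    material_class: str            # e.g. "thermoplastic"
    density_kg_m3: float
    youngs_modulus_gpa: float
    restricted_substances: list[str] = field(default_factory=list)  # compliance info

# A reference record that parts from product development can point to
pa66 = Material(
    name="PA66-GF30",
    material_class="thermoplastic",
    density_kg_m3=1360.0,          # illustrative value
    youngs_modulus_gpa=9.5,        # illustrative value
    restricted_substances=[],
)

# A part master (see previous sketch) would reference the material by name,
# so design, simulation and compliance all work from the same record.
print(f"{pa66.name}: density {pa66.density_kg_m3} kg/m³, E = {pa66.youngs_modulus_gpa} GPa")
```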

Different disciplines, different meanings of the word material

Conclusion

The potential for confusion is considerable. PLM solutions that are tailored to the respective disciplines provide a remedy: they serve the different requirements optimally and thus ensure better collaboration overall. With Master Data Management as a core PDM function, all parts master data can be kept consistent and managed efficiently. Modern Material Data Management stores all information on materials and serves as a reference for the entire product development process. Material Compliance helps document the quality-checked delivery of regulated materials and precursors and ensures that only approved substances are processed. With interfaces to ERP systems, digital materials (in the sense of development) then easily make the step into the physical world and become materials in the sense of production logistics.

Big, bigger, giant. The rise of giant AI models

The evolution of language models in the field of NLP (Natural Language Processing) has led to huge leaps in the accuracy of these models for specific tasks, especially since 2019, but also in the number and scope of their capabilities. For example, the GPT-2 and GPT-3 language models released with much media hype by OpenAI are now available for commercial use and have amazing capabilities in type, scope and accuracy, which I will discuss in another blog post. In the case of GPT-3, this was achieved by training a model with 175 billion parameters on a data set of 570 GB. These are jaw-dropping values.

The larger the models, the higher the cost

However, the costs of training these models are also gigantic: taking only the stated compute costs [1] for a complete training run, the total for training GPT-3 comes to about 10 million USD [2, 3]. In addition, there are further costs for pre-testing, storage, operating costs for deployment and so on, which are likely to be of a similar magnitude. Over the past few years, the trend of building larger and larger models has been consistent, adding about an order of magnitude each year, i.e. the models are roughly 10x larger than the year before.
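
To show where such a figure comes from, here is a small sketch of the back-of-the-envelope calculation from footnotes [1] and [2]; the FLOP count, GPU throughput and hourly price are the assumed values from those footnotes, not measured data.

```python
# Back-of-the-envelope training-cost estimate (assumptions from footnotes [1] and [2]):
#   total training compute for GPT-3  ~ 3.14e23 FLOPs
#   sustained throughput of one V100  ~ 7e12 FLOP/s (7 TFLOPS)
#   cloud price per GPU-hour          ~ 1 USD

TOTAL_FLOPS = 3.14e23        # total floating-point operations for one training run
GPU_FLOPS_PER_SECOND = 7e12  # sustained V100 throughput
USD_PER_GPU_HOUR = 1.0       # approximate cloud price

gpu_seconds = TOTAL_FLOPS / GPU_FLOPS_PER_SECOND
gpu_hours = gpu_seconds / 3600
cost_usd = gpu_hours * USD_PER_GPU_HOUR

print(f"GPU-hours: {gpu_hours:.2e}")                              # ~1.2e7 GPU-hours
print(f"Estimated compute cost: {cost_usd / 1e6:.0f} million USD")  # on the order of 10 million USD
```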

Figure: Size of NLP models from 2018 to 2022. Parameter sizes are plotted logarithmically in units of billions. The red line represents the average growth: approx. 10-20 times larger models per year [2].

The next OpenAI model, GPT-4, is supposed to have about 100 trillion parameters (100 × 10^12). For comparison, the human brain has about 100 billion neurons (100 × 10^9), i.e. a factor of 1000 fewer. The theoretical basis for this gigantism comes from studies that show a clear scaling behavior between model size and performance [4]. According to these studies, the so-called loss – a measure of the error of the models’ predictions – decreases by about 1 whenever the model becomes 10 times larger. However, this only works if the computing power and the amount of training data are also scaled up.
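
To make this rule of thumb tangible, here is a toy extrapolation in Python. It simply applies the relationship stated above (loss drops by roughly 1 per tenfold increase in parameters); the anchor point and the resulting numbers are purely illustrative, not values from the cited studies.

```python
import math

# Rule of thumb from the text: loss decreases by ~1 for every 10x increase
# in model size (assuming compute and training data are scaled up as well).
LOSS_DROP_PER_DECADE = 1.0

# Illustrative anchor point (hypothetical, not from the cited studies):
# a 1-billion-parameter model with a loss of 4.0
ANCHOR_PARAMS = 1e9
ANCHOR_LOSS = 4.0

def estimated_loss(num_params: float) -> float:
    """Extrapolate loss under the 'minus ~1 per 10x parameters' rule of thumb."""
    decades = math.log10(num_params / ANCHOR_PARAMS)
    return ANCHOR_LOSS - LOSS_DROP_PER_DECADE * decades

for params in [1e9, 1e10, 1e11, 175e9, 1e12]:
    print(f"{params:.0e} parameters -> estimated loss {estimated_loss(params):.2f}")
```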

In addition to the enormous amounts of energy required to train these models and the associated CO2 footprint, which is assuming worrying proportions, there are direct economic consequences: not only can smaller companies not afford the cost of training such models, but larger corporations, too, are likely to balk at costs of 10 million USD today, or 100 million USD and more in the future – not to mention the necessary infrastructure and staffing for such an endeavor.

Monopoly position of the big players

This has a direct impact on availability: while the smaller models released up to the end of 2019 are open source and can be freely accessed via specialized providers, this no longer applies to the larger models from around the end of 2020 (the appearance of GPT-3). OpenAI, for example, offers a commercialized API and only grants access through an approval process. On the one hand, this is convenient for developing applications with these NLP models, as the work of hosting and administration is eliminated; on the other hand, the barrier to entry for competitors in this market is so steep that essentially only the super-big AI companies participate there: Microsoft with OpenAI, Google with DeepMind, and Alibaba.

The consequences of these monopoly positions of the leading AI companies are, as with every monopoly, pricing models without alternatives and rigid business practices. However, the capabilities of the current large language models such as GPT-3 and Megatron-Turing NLG are already so impressive that it is foreseeable that in 10 years practically every company will need access to such models for a wide variety of applications. Another problem is that the models’ origin in the American or Chinese sphere introduces a strong bias: on the one hand, this is clearly expressed in the fact that English or Chinese is the language the models work with best; on the other hand, the training datasets from these cultural areas carry their cultural tendencies with them, so other regions of the world are likely to be underrepresented and to fall further behind.

What can be done?

In my opinion, it is important to keep a careful eye on the development and to be more active in shaping the development of AI in the European area. In any case, a greater effort is needed to avoid dependence on monopolized AI providers in the long term. It is perhaps conceivable to involve national computing centers or research alliances that, united with companies, train and commercialize their own models and form a counterweight to American or Chinese companies. The next 10 years will be decisive here.

[1] See here, section D, as well as compute costs per GPU, e.g. on Google Cloud approx. 1 USD/hour for an NVIDIA V100.
[2] Calculation approach: one V100 delivers about 7 TFLOPS = 7 × 10^12 FLOP/s; training GPT-3 requires about 3.14 × 10^23 FLOPs, so 3.14 × 10^23 / (7 × 10^12) / 3600 ≈ 10^7 GPU-hours, which at roughly 1 USD per GPU-hour comes to about 10 million USD. Details of the calculation and the research of the parameters here.
[3] See also here for a comparison graph with older data.
[4] See arxiv and Deepmind.