Digitalization for the High Seas

The sun is shining in Hamburg, and the mild autumn air is in motion – even though I am perfectly equipped for rainy weather. In early October, shipbuilders from around the world gather in a conference hotel near the harbor for the CADMATIC Digital Wave Forum. The user meeting invites participants to experience CADMATIC’s CAD application for shipbuilding firsthand and to learn about current trends, product innovations, and new developments. The highlight: CADMATIC Wave, an integrated CAD/PLM solution specifically designed for shipbuilding and jointly developed by CADMATIC and CONTACT.

Model visualization simplifies data retrieval and collaboration

After our first coffee, we slowly make our way into the conference hall. The morning is filled with numbers and facts around CADMATIC’s digitalization strategy. In the afternoon, our Managing Director Maximilian Zachries presents CADMATIC Wave to the 200 participants. As we demonstrate the first functionalities of the integrated Product Data Management (PDM), some attendees quickly pull out their phones to snap a photo of the feature. I am somewhat excited – now it’s official. Now we also need the data model. And that isn’t quite so simple.


CADMATIC’s Atte Peltola presents CADMATIC Wave. (© CADMATIC)

The resounding call for a data model for shipbuilding carries me through the three days in Hamburg. During my conversations with industry colleagues, it becomes evident that the information required and generated in the shipbuilding process must be mappable within the model. Model-centric is the magic word: the ship’s geometry is visualized, including equipment, fittings, and logistics. Information can then be retrieved and added via the specific parts of the model. Model visualizations provide a shared and intuitive view of the ship for all trades involved, significantly simplifying information retrieval. This enhances the efficiency of engineering activities and collaboration, including with partners.

Basing a data model on ship geometry is challenging

Engaged in a discussion with a research associate from the Norwegian University of Science and Technology (NTNU), we stumble upon a question: is the geometry model even suitable for generating a generic product structure for data storage in the PDM? After all, such a ship contains a great many locations that would each have to serve as placeholders in a data model. And let me put it this way: data models are typically organized along the processes of product creation, not the geometry of a ship model. I am curious to see how we will solve this challenge in CADMATIC Wave.

The evening event takes place on the Cap San Diego, a museum ship in the Hamburg harbor. The rustic flair of a ship’s belly and the lavish buffet create a cozy atmosphere for lively conversations. We talk about life in Finland and Norway and the difference between information and data management. The evening ends in storm and rain; I finally put my rain gear to good use and return to the hotel dry and warm.

SEUS brings European shipbuilding to a new efficiency level

At the CADMATIC Digital Wave Forum, I also meet my consortium partners from the Smart European Shipbuilding (SEUS) project for the first time. Among them are representatives from NTNU and CADMATIC, as well as employees from two shipyards, the Norwegian Ulstein Group and the Spanish Astilleros Gondan SA. SEUS is an EU-funded research project with the goal of developing an integrated CAD and PLM solution for shipbuilding. This endeavor goes way beyond the functionalities we develop in CADMATIC Wave. For instance, we aim to incorporate knowledge management and utilize AI for searching within product data.

In this context, the broad positioning of our research department, CONTACT Research, works to our advantage. Our focus areas include not only Digital Lifecycle Management, where we conduct research on digitalization strategies for various industries, but also Artificial Intelligence. The AI product data search we aim to implement in SEUS allows us to bring our self-declared motto to life: “Bringing artificial intelligence into the engineering domains.”

As three days in Hamburg come to an end, three strong impressions remain:

  1. It is necessary to design an abstract data model for shipbuilding. One that contains the modules of a ship and yet can be customized to fit the specific needs of any shipbuilder. This data model must be closely linked to the development process.
  2. Personal exchange and meeting each other face to face have been an enriching experience for me in this new work area. This positive feeling motivates me for my future work in the SEUS project.
  3. In Hamburg, rain gear is a must.

Big, bigger, giant. The rise of giant AI models

The evolution of language models in the field of NLP (Natural Language Processing) has led to huge leaps in the accuracy of these models for specific tasks, especially since 2019, but also in the number and scope of the capabilities themselves. As an example, the GPT-2 and GPT-3 language models released with much media hype by OpenAI are now available for commercial use and have amazing capabilities in type, scope, and accuracy, which I will discuss in another blog post. In the case of GPT-3, this was achieved by training a model with 175 billion parameters on a data set of 570 GB. These are jaw-dropping values.

The larger the models, the higher the cost

However, the costs of training these models are also gigantic: taking only the stated compute costs [1] for a complete training run, the total for training GPT-3 comes to around 10 million USD [2][3]. In addition, there are further costs for pre-testing, storage, commodity costs for deployment, etc., which are likely to be of a similar magnitude. Over the past few years, the trend of building larger and larger models has been consistent, adding about an order of magnitude each year, i.e., the models are 10x larger than the year before.
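The cost estimate can be reproduced in a few lines. A back-of-envelope sketch, using the total compute of 3.14 × 10^23 FLOPs and the assumptions of roughly 7 TFLOPS sustained on an NVIDIA V100 at about 1 USD per GPU-hour (all three numbers are rough approximations, not vendor-quoted figures):

```python
# Back-of-envelope estimate of GPT-3 training cost.
# Assumptions (approximate): 3.14e23 FLOPs total training compute,
# a V100 sustaining 7 TFLOPS, and ~1 USD per GPU-hour of cloud time.
total_flops = 3.14e23
v100_flops_per_s = 7e12
usd_per_gpu_hour = 1.0

gpu_seconds = total_flops / v100_flops_per_s
gpu_hours = gpu_seconds / 3600          # ~1.2e7 GPU-hours
cost_usd = gpu_hours * usd_per_gpu_hour

print(f"{gpu_hours:.2e} GPU-hours, ~{cost_usd / 1e6:.0f} million USD")
```

With these inputs the estimate lands at roughly 10 million USD, matching the figure in the footnotes; any change in hardware efficiency or hourly rate shifts the result proportionally.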

Size of NLP models from 2018-2022. Parameter sizes are plotted logarithmically in units of billions. The red line represents the average growth: approx. 10-20 times larger models per year [2].

The next OpenAI model, GPT-4, is rumored to have about 100 trillion parameters (100 × 10^12). For comparison, the human brain has about 100 billion neurons (100 × 10^9), a factor of 1,000 fewer. The theoretical basis for this gigantism comes from studies that show a clear scaling behavior between model size and performance [4]. According to these studies, the so-called loss – a measure of the error of the models’ predictions – decreases by a fixed amount each time the model becomes 10 times larger. However, this only works if the computing power and the amount of training data are also scaled up.
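To make the scaling claim concrete, here is a small sketch of the power-law relationship reported in the scaling-law studies [4]. The constant and exponent are the published fits for GPT-style models (Kaplan et al., 2020) and should be read as approximations, not exact values:

```python
# Illustrative power-law scaling of loss with parameter count,
# in the spirit of the scaling-law studies cited in the text.
# n_c and alpha are the approximate published fits for GPT-style
# models (Kaplan et al. 2020); treat the outputs as illustrative.
def loss(n_params, n_c=8.8e13, alpha=0.076):
    return (n_c / n_params) ** alpha

for n in (1e9, 1e10, 1e11, 1e12):
    print(f"{n:.0e} params -> loss ~ {loss(n):.3f}")
```

Each factor-of-ten increase in parameters lowers the loss by the same multiplicative step, which is exactly the "one order of magnitude per year" logic driving the gigantism.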

In addition to the enormous amounts of energy required to train these models and the associated CO2 footprint, which is assuming worrying proportions, there are direct economic consequences: not only can smaller companies not afford the cost of training such models, even larger corporations are likely to balk at costs of 10 million USD today, or 100 million USD or more in the future. Not to mention the necessary infrastructure and staffing for such an endeavor.

Monopoly position of the big players

This has a direct impact on availability: while the smaller models released up to the end of 2019 are open source and can be freely accessed via specialized providers, this no longer applies to the larger models from 2020 onward (the appearance of GPT-3). OpenAI, for example, offers a commercialized API and only grants access through an approval process. On the one hand, this is convenient for developing applications with these NLP models, as the work of hosting and administration is eliminated; on the other hand, the barrier to entry for competitors in this market is so steep that essentially only the super-big AI companies participate there: Microsoft with OpenAI, Google with DeepMind, and Alibaba.

The consequences of these monopoly positions of the leading AI companies are, as with every monopoly, pricing models without alternatives and rigid business practices. Yet the capabilities of the current large language models such as GPT-3 and Megatron-Turing NLG are already so impressive that it is foreseeable that in 10 years virtually every company will need access to such models for a wide variety of applications. Another problem is that the models’ origin in the American or Chinese sphere introduces a large bias. On the one hand, this clearly shows in the fact that English or Chinese is the language the models work with best; on the other hand, training datasets from these cultural areas carry the cultural tendencies of those spaces, so other regions of the world are likely to be underrepresented and to fall further behind.

What can be done?

In my opinion, it is important to keep a careful eye on the development and to be more active in shaping the development of AI in the European area. In any case, a greater effort is needed to avoid dependence on monopolized AI providers in the long term. It is perhaps conceivable to involve national computing centers or research alliances that, united with companies, train and commercialize their own models and form a counterweight to American or Chinese companies. The next 10 years will be decisive here.

[1] See here in section D, as well as compute costs per GPU, e.g. on Google Cloud approx. 1 USD/hour for an NVIDIA V100.
[2] Calculation approach: a V100 delivers 7 TFLOPS = 7 × 10^12 FLOPs/s; 3.14 × 10^23 FLOPs ÷ (7 × 10^12 FLOPs/s) ÷ 3600 s/h ≈ 10^7 GPU-hours ≈ 10 million USD. Details of the calculation and research of the parameters here.
[3] See also here for a comparison graph with older data.
[4] See arXiv and DeepMind.

What is Quantum Computing good for?

When it comes to quantum computing (QC), after the quite real breakthroughs in hardware and some spectacular announcements under titles like “Quantum Supremacy”, the usual hype cycle has developed, with a phase of vague and exaggerated expectations. I would like to briefly outline here why the enormous effort is being made in this area and what realistic expectations lie behind it.

To understand the fundamental differences between QC and Classical Computing (CC), we first need to take a step back and ask on what basis both computing paradigms operate. For CC, the basis is the universal Turing machine, expressed in the ubiquitous von Neumann architecture. This may sound a bit outlandish, but in principle it is easy to understand: a universal Turing machine captures the fact that anything that is (classically) algorithmically expressible (Turing machine) can be programmed on a classical computer (universal).

The vast majority of “algorithms” implemented in practice are simple sequences of actions that react to external events such as mouse clicks on a web page, transactions in a web store, or messages from other computers in the network. A very small, but important, number of programs do what is generally associated with the word algorithm: perform arithmetic operations to solve a mathematical problem. The Turing machine is the appropriate mental model for programming these problems and is the reason programming languages have the constructs we are used to: loops, branches, elementary arithmetic operations, etc.

What is the computing paradigm for a quantum computer?

A quantum computer is built up of quantum states that can be entangled with each other and evolved via quantum gates. This also sounds a bit off the wall, but simply means that a quantum computer is prepared in an initial (quantum) state, which evolves in time and is measured at the end. The paradigm for a quantum computer is therefore the Schrödinger equation, the fundamental equation of quantum mechanics. Even without understanding the details, it should be clear that everyday problems are difficult to squeeze into the formalism of quantum mechanics, and that this effort probably yields no profit: quantum mechanics is simply not the appropriate mental model for most (“everyday”) problems, and it is not more efficient at solving them either.
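For completeness, the paradigm can be written down compactly: the state of the quantum register evolves under the system's Hamiltonian according to the Schrödinger equation until it is measured,

```latex
% Time evolution of the quantum state |psi(t)> under the Hamiltonian H:
i\hbar \,\frac{\partial}{\partial t}\,\lvert \psi(t) \rangle = \hat{H}\,\lvert \psi(t) \rangle
```

A quantum program is then nothing more than the choice of an initial state, a sequence of gates implementing this evolution, and a final measurement.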

So what can you do with it?

The answer is very simple: QC is essentially a method for computing quantum systems. Now this sounds redundant, but it means that a quantum computer is a universal machine for calculating quantum systems. This vision, formulated by Richard Feynman back in 1981, still guides the logic of research today. It is therefore not surprising that publications dealing with applications are located either in quantum chemistry or in basic physics research [5][6].

Why does this matter?

Because the classical computer is very inefficient at calculating or simulating quantum systems. This inefficiency stems from the mathematical structure of quantum mechanics itself and will not be overcome by classical algorithms, no matter how good they are. Beyond basic research, QC is also likely to become important for the hardware of classical computers, where miniaturization is pushing transistor design on chips to the limits of classical theories of electricity.

Besides, there are many interesting connections to number theory and various other problems, which so far can be classified as interesting curiosities. The connection to number theory alone could have a significant impact, because for historical reasons almost all practical asymmetric encryption schemes rely on algorithms that essentially assume (there is no proof) that prime factorization cannot be solved efficiently with classical algorithms. Quantum computers can do this in principle but are still far from being able to do so in terms of hardware.
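To illustrate why the factoring assumption matters, here is a minimal sketch of the simplest classical approach, trial division. Its runtime grows with the square root of the smaller factor, so doubling the bit length of the number roughly squares the work – exponential in the input size – which is why factoring the moduli used in real encryption schemes is infeasible classically, while Shor's algorithm on a quantum computer would in principle do it in polynomial time:

```python
# Minimal sketch: classical trial division to factor an integer n.
# The loop runs up to sqrt(n), so the work grows exponentially
# with the bit length of n -- fine for toy numbers, hopeless for
# the ~2048-bit moduli used in practical asymmetric encryption.
def factor(n):
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d, n // d   # found the smallest prime factor
        d += 1
    return n, 1                # n itself is prime

print(factor(3 * 7))
print(factor(101 * 113))
```

Smarter classical algorithms (such as the general number field sieve) improve on this considerably, but all known ones remain super-polynomial, which is exactly the gap a sufficiently large quantum computer would close.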