With the rise of modern AI systems, you often hear phrases like, “The text is converted into an embedding…” – especially when working with large language models (LLMs). However, embeddings are not limited to text; they are vector representations for all types of data.
Deep learning has evolved significantly in recent years, particularly through training large models on massive datasets. These models generate versatile embeddings that prove useful across many domains. Since most developers lack the resources to train their own models, they use pre-trained ones.
Many AI systems follow this basic workflow:
Input → API (to large deep model) → Embeddings → Embeddings are processed → Output
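To make this workflow concrete, here is a minimal sketch that sends text to a hosted embedding API, shown with the OpenAI Python client purely as an example; the model name and the “processing” step are placeholders, and any other embedding endpoint would work the same way.

```python
# A minimal sketch of the workflow above, using the OpenAI Python client as an
# example (the model name is a placeholder; any embedding API works similarly).
from openai import OpenAI

client = OpenAI()  # reads the API key from the environment

response = client.embeddings.create(
    model="text-embedding-3-small",
    input=["The text is converted into an embedding."],
)
embedding = response.data[0].embedding  # a plain list of floats

# "Embeddings are processed": here we simply inspect the vector's dimensionality.
print(len(embedding))
```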
In this blog post, we take a closer look at this fundamental component of AI systems.
What are embeddings?
Simply put, an embedding is a kind of digital summary: a sequence of numbers that captures the characteristics of an object, whether it is text, an image, or audio. Similar objects have embeddings that are close to each other in the vector space.
Technically speaking, embeddings are vector representations of data. They are produced by a mapping (an embedder or encoder) that functions like a translator. Modern embeddings are often created using deep neural networks, which reduce complex data to a lower dimension. However, some information is lost through this compression, meaning that the original input cannot always be exactly reconstructed from an embedding.
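A small, self-contained sketch illustrates the idea of “closeness” in vector space. The four-dimensional vectors below are invented for illustration; real models typically produce hundreds or thousands of dimensions.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """1.0 means the vectors point in the same direction, 0.0 means unrelated."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy embeddings, invented for illustration.
cat = np.array([0.8, 0.1, 0.3, 0.0])
dog = np.array([0.7, 0.2, 0.4, 0.1])
car = np.array([0.0, 0.9, 0.1, 0.8])

print(cosine_similarity(cat, dog))  # high score: related concepts
print(cosine_similarity(cat, car))  # low score: unrelated concepts
```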
How do embeddings work?
Embeddings are not a new invention, but deep learning has significantly improved them. They can be constructed manually or learned automatically through machine learning. Early methods like Bag-of-Words or One-Hot Encoding are simple approaches that represent words by counting their occurrences or using binary vectors.
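To show how simple these early approaches are, here is a minimal Bag-of-Words sketch over a toy corpus (invented sentences, no real dataset):

```python
# Bag-of-Words: each document becomes a vector of word counts over a fixed vocabulary.
documents = ["the cat sat", "the dog sat", "the cat chased the dog"]
vocabulary = sorted({word for doc in documents for word in doc.split()})

def bag_of_words(doc: str) -> list[int]:
    counts = {word: 0 for word in vocabulary}
    for word in doc.split():
        counts[word] += 1
    return [counts[word] for word in vocabulary]

for doc in documents:
    print(doc, "->", bag_of_words(doc))

# One-Hot Encoding is the special case where each vector marks exactly one word
# with a 1 and leaves every other position at 0.
```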
Today, neural networks handle this process. Models like Word2Vec or GloVe automatically learn the meaning of and relationships between words. In image processing, deep learning models identify key points and extract features.
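For a first impression of learned word embeddings, a few lines with gensim's Word2Vec are enough. This is only a sketch: it assumes gensim is installed, and the tiny toy corpus is far too small for meaningful vectors.

```python
from gensim.models import Word2Vec

# Toy corpus; real training data would contain millions of sentences.
sentences = [
    ["the", "spindle", "rotates", "at", "high", "speed"],
    ["the", "motor", "drives", "the", "spindle"],
    ["bearings", "support", "the", "rotating", "shaft"],
]
model = Word2Vec(sentences, vector_size=50, window=3, min_count=1, epochs=50)

vector = model.wv["spindle"]                        # 50-dimensional word embedding
neighbors = model.wv.most_similar("spindle", topn=3)
print(vector.shape, neighbors)
```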
Why are embeddings useful?
Because almost any type of data can be represented with embeddings – text, images, audio, videos, graphs, and more. In a lower-dimensional vector space, tasks such as similarity search or classification are easier to solve.
For example, if you want to determine which word in a sentence does not fit with the others, embeddings allow you to represent the words as vectors, compare them, and identify the “outliers”. Additionally, embeddings create connections between different formats: a text query can also retrieve matching images and videos.
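The odd-one-out idea can be tried directly with pre-trained word vectors. The sketch below assumes gensim is installed and downloads a small GloVe model on first use:

```python
import gensim.downloader as api

glove = api.load("glove-wiki-gigaword-50")  # pre-trained GloVe vectors, downloaded on first use

words = ["screw", "bolt", "nut", "banana"]
print(glove.doesnt_match(words))  # the word whose vector is furthest from the rest
```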
In many cases, you do not need to create embeddings from scratch. Numerous pre-trained models are available, from the language models behind ChatGPT to image models like ResNet. These can be adapted to specialized domains or tasks.
Small numbers, big impact
Embeddings have become one of the buzzwords in AI development. The idea is simple: transforming complex data into compact vectors that make it easier to solve tasks like detecting differences and similarities. Developers can choose between pre-trained embeddings or training their own models. Embeddings also enable different modalities (text, images, videos, audio, etc.) to be represented within the same vector space, making them an essential tool in AI.
Research in the field of Artificial Intelligence (AI) is challenging but full of potential – especially for a new team. When CONTACT Research was formed in 2022, AI was designated as one of four central research areas right from the start. Initially, we concentrated on smaller projects, including traditional data analysis. However, with the growing popularity of ChatGPT, we shifted our attention to Large Language Models (LLMs) and took the opportunity to work with cutting-edge tools and technologies in this promising field. But for a research team, one critical question emerged: Where do we start?
Here, we share some of our experiences, which can serve as guidance for others embarking on their AI journey.
The beginning: Why similarity search became our starting point
From the outset, our goal was clear: we wanted more than just a research project; we aimed for a real use case that could ideally be integrated directly into our software. To get started quickly, we opted for small experiments and looked for a specific problem that we could solve step by step.
Our software stores vast amounts of data, from product information to project details. Powerful search capabilities make a decisive difference here. Our existing search function did not recognize synonyms or natural language and therefore sometimes missed what users were really looking for. Combined with valuable user feedback, this quickly led to the conclusion that similarity search would be an ideal starting point and should therefore become our first research topic. An LLM has the power to elevate our search functionality to a new level.
The right data makes the difference
Our vision was to make knowledge from various sources such as manuals, tutorials, and specifications easily accessible by asking a simple question. The first and most crucial step was to identify an appropriate data source: one large enough to provide meaningful results but not so extensive that resource constraints would impede progress. In addition, the dataset needed to be of high quality and easily available.
For the experiment, we chose the web-based documentation of our software. It contains no confidential information and is accessible to customers and partners. Initial experiments with this dataset quickly delivered promising results, so we intensified the development of a semantic search application.
What is semantic search?
In short, unlike classic keyword search, semantic search also recognizes related terms and expands queries to include contextually related results – even if these are phrased differently. How does this work? The first step is semantic indexing: the LLM converts the content of the source texts into vectors and stores them in a database. Search queries are transformed into vectors in the same way and then compared to the stored vectors using a nearest-neighbor search. The results are returned as a sorted list with links to the documentation.
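A stripped-down version of this indexing and retrieval loop fits in a few lines. The sketch below uses the sentence-transformers library with a publicly available model as a stand-in; it is not our production pipeline, and the documents are placeholders.

```python
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # placeholder embedding model

# Semantic indexing: convert the source texts into vectors.
documents = [
    "How to configure the search index",
    "Installing the server on Linux",
    "Troubleshooting login problems",
]
doc_vectors = model.encode(documents, normalize_embeddings=True)

# A query is embedded the same way and compared via nearest-neighbor search.
query_vector = model.encode(["setup on ubuntu"], normalize_embeddings=True)[0]
scores = doc_vectors @ query_vector  # dot product equals cosine similarity for normalized vectors

for idx in np.argsort(scores)[::-1]:  # sorted result list, best match first
    print(f"{scores[idx]:.2f}  {documents[idx]}")
```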
Plan your infrastructure carefully!
Implementing our project required numerous technical and strategic decisions. For the pipeline that processes the data, LangChain best met our requirements. The hardware also posed challenges: for text volumes of this scale, laptops are insufficient, so servers or cloud infrastructure is required. A well-structured database is another critical factor for successful implementation.
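To give an idea of what such a pipeline looks like, here is a rough LangChain sketch: split the documentation into chunks, embed them, and store them in a vector index. The module paths, the FAISS store, and the embedding model are assumptions that depend on the installed packages; our actual setup differs in the details.

```python
from langchain_text_splitters import RecursiveCharacterTextSplitter
from langchain_community.vectorstores import FAISS
from langchain_community.embeddings import HuggingFaceEmbeddings

raw_pages = ["... documentation page 1 ...", "... documentation page 2 ..."]

# Split long pages into overlapping chunks so each vector covers a focused passage.
splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=50)
chunks = splitter.create_documents(raw_pages)

# Embed the chunks and store them in a local FAISS index.
embeddings = HuggingFaceEmbeddings(model_name="all-MiniLM-L6-v2")
index = FAISS.from_documents(chunks, embeddings)

# At query time, the nearest chunks are returned as search results.
results = index.similarity_search("How do I configure the search index?", k=3)
for doc in results:
    print(doc.page_content[:80])
```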
Success through teamwork: Focusing on data, scope, and vision
Success in AI projects depends on more than just technology; it is also about the team. Essential roles include Data Engineers, who bridge technical expertise and strategic goals, Data Scientists, who analyze large amounts of data, and AI Architects, who define the vision for AI usage and coordinate the team. While AI tools supported us with “simple” routine tasks and creative impulses, they could not replace the constructive exchange and close collaboration within the team.
Gather feedback and improve
At the end of this first phase, we shared an internal beta version of the Semantic Search with our colleagues. This allowed us to gather valuable feedback in order to plan our next steps. The enthusiasm for further development is high, fueling our motivation to continue.
What’s next?
Our journey in AI research has only just begun, but we have already identified important milestones. Many exciting questions lie ahead: Which model will best suit our long-term needs? How do we make the results accessible to users?
Our team continues to grow – in expertise, members, and visions. Each milestone brings us closer to our goal: integrating the full potential of AI into our work.
Industry 4.0 promises more efficient and sustainable manufacturing processes through digitalization. The foundation for this is a seamless, automatic exchange of information between systems and products. This is where the Asset Administration Shell (AAS) comes into play.
An Asset Administration Shell is a vendor-independent standard for describing digital twins. Basically, it is the digital representation of an asset – either a physical product or a virtual object (e.g., a document or software).
The AAS defines how the asset appears in the digital world. It describes which information about the asset is relevant for communication and how this information is presented. This means the AAS can provide all important data about the asset in a standardized and automated way.
Let us take a look at a practical application to understand the benefits of an AAS:
Use case: AAS as enabler for new services
As part of the ESCOM research project, CONTACT Software collaborates with GMN Paul Müller Industrie GmbH & Co. KG to implement AAS-based component services. The family-run company manufactures motor spindles which are installed by its customers as components in metalworking machine tools and then resold.
Before the project began, GMN had already developed a new sensor technology. It enables deep insights into the behavior of a spindle and provides information on the overall operation of the spindle system. The company wants to use this data to offer new, product-related services:
Certified commissioning: Before GMN ships its spindles, the components are put through a defined test cycle on the company’s in-house test bench. GMN uses the data from this reference cycle to ensure that motor spindles are installed and commissioned correctly at the customer’s facility.
Predictive services: Using the IDEA-4S sensor microelectronics, customers will be able to continuously record and analyze operating data that provides insights into the availability and operation of the spindles. If necessary, the data can be shared with GMN, for example, for problem analysis. This saves valuable time until the machine is back up and running. In the future, GMN will be able to offer smart predictive services like predictive maintenance.
About GMN Paul Müller Industrie GmbH
GMN Paul Müller Industrie GmbH & Co. KG is a family-owned mechanical engineering company based in Nuremberg, Germany. It produces high-precision ball bearings, machine spindles, freewheel clutches, non-contact seals, and electric drives that are used in various industries. The company manufactures most of these components individually for its customers on site and sells its products via a global sales network.
How do we realize the new services?
To provide such services, companies must be able to access and analyze the sensor data of their machines. Furthermore, machines (or their components) must be enabled to communicate independently with other assets and systems on the shopfloor.
For both tasks, GMN uses CONTACT Elements for IoT. The modular software not only helps the company record, document, and evaluate the reference and usage data of its spindles; it also includes functions that enable users to create, fill, and manage the AAS for an asset.
Background
During the implementation of services based on spindle operating data, GMN benefits from the cooperation with a customer. This company installs the spindles in processing machines that GMN uses to manufacture its own products. As a result, GMN can gather the operating data in-house and use it to improve the next generation of spindles.
What role does the AAS play?
For the components to exchange information in a standardized form, an AAS must be created for the spindle at item and serial number level. This is also done using CONTACT Elements for IoT. The new services are mapped in the AAS metamodel, which serves as the “link” to the service offerings.
AAS and submodels
The AAS of an Industry 4.0 component consists of one or more submodels that each contain a structured set of characteristics. These submodels are defined by the Industrial Digital Twin Association (IDTA), an initiative in which 113 organizations from research, industry and software (including CONTACT Software) collaborate to define AAS standards. A list of all currently published submodels is available at https://industrialdigitaltwin.org/en/content-hub/submodels.
In CONTACT Elements for IoT, GMN can populate the AAS submodels with little effort. The platform includes a widget developed as a prototype during the research project. It provides an overview of which submodels currently exist for the asset and which are available but not yet created. Through the frontend, users can jump directly to the REST node server and upload or download submodels (in AAS/JSON format).
During the implementation of data-driven service offerings, GMN focuses on the following submodels:
Time Series Data (e.g., semantic information about time series data),
Digital Nameplate (e.g., information about the product, such as the manufacturer’s name and the product name and family),
Contact Information (standardized metadata of an asset), and
Carbon Footprint (information about the carbon footprint of an asset).
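For illustration, a heavily simplified Digital Nameplate submodel might look roughly like this in AAS/JSON form. The field names follow the IDTA metamodel, but the identifiers and values are invented, and the real template contains considerably more mandatory elements.

```python
import json

# Simplified sketch of an AAS submodel in JSON form (illustrative values only).
nameplate = {
    "modelType": "Submodel",
    "id": "https://example.com/aas/submodels/nameplate/demo-spindle-001",
    "idShort": "Nameplate",
    "submodelElements": [
        {
            "modelType": "Property",
            "idShort": "ManufacturerName",
            "valueType": "xs:string",
            "value": "GMN Paul Müller Industrie GmbH & Co. KG",
        },
        {
            "modelType": "Property",
            "idShort": "ManufacturerProductDesignation",
            "valueType": "xs:string",
            "value": "High-precision motor spindle",
        },
    ],
}

print(json.dumps(nameplate, indent=2, ensure_ascii=False))
```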
Filling the submodels is simple, as the Time Series Data submodel demonstrates. During the reference run of a motor spindle on the in-house test bench, CONTACT Elements for IoT records the time series data and automatically transfers it to the AAS submodel of the spindle being tested. At the same time, the platform creates a document for the reference run. This allows GMN to track its validity at any time and make it available to external stakeholders.
New services on the horizon
Using Asset Administration Shells allows GMN to realize its service ideas. This currently concerns the commissioning service and automated quality assurance services.
By analyzing the spindle data, the company can identify outliers in the operating data and make suitable recommendations for action. For example, deviating vibration velocities can indicate that the spindle was installed incorrectly in the machine or that time-varying processes are occurring. The analysis can also provide insights into anomalies in operating behavior.
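As a generic illustration of this kind of outlier check (not GMN's actual analysis), a simple z-score test over vibration readings already flags the suspicious values:

```python
import numpy as np

# Invented vibration velocity readings in mm/s; the spike at index 5 is the outlier.
vibration_mm_s = np.array([1.2, 1.3, 1.1, 1.4, 1.2, 4.8, 1.3])

z_scores = (vibration_mm_s - vibration_mm_s.mean()) / vibration_mm_s.std()
outliers = np.where(np.abs(z_scores) > 2.0)[0]
print(outliers)  # -> [5]
```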
Dashboards in CONTACT Elements for IoT increase transparency. They provide GMN with all relevant information about the spindles on the test bench, from 3D models to status data. This overview is extremely valuable, particularly for quality management.
In summary
Asset Administration Shells are vendor-independent standards for describing digital twins. They are among the most important levers for implementing new Industry 4.0 business models, as they enable communication between assets, systems, and organizations. The example of GMN demonstrates the practical benefits of the AAS. The company uses it to design new, product-related services based on information from the AAS of its products. GMN can successively improve these services by continuously analyzing operating data in CONTACT Elements for IoT.