Building a Semantic Search: Insights from the start of our journey

Research in the field of Artificial Intelligence (AI) is challenging but full of potential – especially for a new team. When CONTACT Research was formed in 2022, AI was designated as one of four central research areas right from the start. Initially, we concentrated on smaller projects, including traditional data analysis. However, with the growing popularity of ChatGPT, we shifted our attention to Large Language Models (LLMs) and took the opportunity to work with cutting-edge tools and technologies in this promising field. But for us as a research team, one critical question soon emerged: where should we start?

Here, we share some of our experiences which can serve as guidance to others embarking on their AI journey.

The beginning: Why similarity search became our starting point

From the outset, our goal was clear: we wanted more than just a research project – we aimed for a real use case that could ideally be integrated directly into our software. To get started quickly, we opted for small experiments and looked for a specific problem that we could solve step by step.

Our software stores vast amounts of data, from product information to project details. Powerful search capabilities make a decisive difference here. Our existing search function did not recognize synonyms or natural language and sometimes missed what users were really looking for. Combined with valuable user feedback, this quickly led to the conclusion that similarity search was an ideal starting point and should therefore be our first research topic. An LLM has the power to elevate our search functionality to a new level.

The right data makes the difference

Our vision was to make knowledge from various sources such as manuals, tutorials, and specifications easily accessible by asking a simple question. The first and most crucial step was to identify an appropriate data source: one large enough to provide meaningful results but not so extensive that resource constraints would impede progress. In addition, the dataset needed to be of high quality and easily available.

For the experiment, we chose the web-based documentation of our software. It contains no confidential information and is accessible to customers and partners. Initial experiments with this dataset quickly delivered promising results, so we intensified the development of a semantic search application.

What is semantic search?

In short, unlike classic keyword search, semantic search also recognizes related terms and expands queries to include contextually related results – even if these are phrased differently. How does this work? In the first step, semantic indexing, a language model converts the content of the source texts into vectors (embeddings) and stores them in a database. Search queries are transformed into vectors in the same way and then compared to the stored vectors using a nearest-neighbor search. The results are returned as a ranked list with links to the documentation.
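The indexing-and-retrieval loop described above can be sketched in a few lines. This is a toy illustration, not our production pipeline: the `embed` stand-in only counts hashed word overlaps (a real deployment would call an embedding model), and the "database" is a plain NumPy array rather than a vector store.

```python
import numpy as np

def embed(text: str, dim: int = 64) -> np.ndarray:
    """Toy embedding: hashed bag-of-words, normalized to unit length.
    hash() is salted per process but consistent within one run."""
    vec = np.zeros(dim)
    for token in text.lower().split():
        vec[hash(token) % dim] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

# Semantic indexing: embed each documentation snippet once and store it.
docs = [
    "How to configure the search index",
    "Exporting product data to PDF",
    "Managing user permissions and roles",
]
index = np.stack([embed(d) for d in docs])

def search(query: str, top_k: int = 2) -> list[tuple[str, float]]:
    """Nearest-neighbor search: cosine similarity of the query vector
    against all stored vectors (dot product, since all are unit-length)."""
    scores = index @ embed(query)
    ranked = np.argsort(scores)[::-1][:top_k]
    return [(docs[i], float(scores[i])) for i in ranked]

print(search("configure the index"))
```

Swapping `embed` for a real embedding model and the array for a vector database turns this sketch into the architecture described above without changing the control flow.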

Plan your infrastructure carefully!

Implementing our project required numerous technical and strategic decisions. For the pipeline that processes the data, LangChain best met our requirements. The hardware also poses challenges: for text volumes of this scale, laptops are insufficient, so servers or cloud infrastructure are required. A well-structured database is another critical factor for successful implementation.
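To give a feel for what such a data pipeline does before any embedding happens, here is a minimal, hand-rolled chunking step – a stand-in for the document splitters a framework like LangChain provides. The `chunk_size` and `overlap` values are illustrative, not our production settings.

```python
def chunk_text(text: str, chunk_size: int = 500, overlap: int = 100) -> list[str]:
    """Split a document into overlapping chunks so each embedding covers a
    coherent passage; the overlap preserves context that straddles a boundary."""
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    step = chunk_size - overlap
    chunks = []
    for start in range(0, len(text), step):
        chunk = text[start:start + chunk_size]
        if chunk:
            chunks.append(chunk)
    return chunks

page = "word " * 300  # stand-in for one documentation page (1500 characters)
chunks = chunk_text(page, chunk_size=500, overlap=100)
print(len(chunks), len(chunks[0]))  # 4 500
```

Each chunk, rather than each whole page, becomes one vector in the index, which keeps search results focused on the relevant passage.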

Success through teamwork: Focusing on data, scope, and vision

Success in AI projects depends on more than just technology; it is also about the team. Essential roles include Data Engineers, who bridge technical expertise and strategic goals; Data Scientists, who analyze large amounts of data; and AI Architects, who define the vision for AI usage and coordinate the team. While AI tools supported us with “simple” routine tasks and creative impulses, they could not replace the constructive exchange and close collaboration within the team.

Gather feedback and improve

At the end of this first phase, we shared an internal beta version of the Semantic Search with our colleagues. This allowed us to gather valuable feedback in order to plan our next steps. The enthusiasm for further development is high, fueling our motivation to continue.

What’s next?

Our journey in AI research has only just begun, but we have already identified important milestones. Many exciting questions lie ahead: Which model will best suit our long-term needs? How do we make the results accessible to users?

Our team continues to grow – in expertise, members, and visions. Each milestone brings us closer to our goal: integrating the full potential of AI into our work.

For detailed insights into the founding of our AI team and on the Semantic Search, visit the CONTACT Research Blog.

ISO 27001 Certification: security as a standard for our cloud products

Digitalization is shaping our lives and workplaces like never before. With this evolution comes an increased responsibility to protect data effectively and ensure stable service delivery. Information security is no longer a “should” but an absolute “must.”

As a provider of industrial software solutions from the cloud, we make quality, security, and reliability our top priorities. We are delighted to announce our successful ISO 27001 certification by Datenschutz Cert. This confirms our commitment to providing products that meet the highest security standards and effectively protect data.

More security, efficiency, and sustainability with automation

Our goal was clear from the beginning: to meet security and stability requirements with innovative technologies. We rely heavily on automation and Infrastructure as Code (IaC) to achieve this. These measures enable us to implement security mechanisms effectively and integrate them seamlessly into our development and operating processes.

One crucial aspect of our preparations was to take climate risks into account. Events like extreme weather pose potential threats to IT infrastructures. In response, we developed solutions that minimize risks while enhancing efficiency – such as monitoring tools and automated scaling. These technologies reduce our carbon footprint and help to ensure a high level of security and sustainability.

Security culture as a success factor

Information security is more than just meeting standards – it is an integral part of our corporate culture. Principles such as high availability, automation, and the use of a single source of truth define how we work and foster a structured approach to tackling complex challenges. A standout aspect is the contribution of our team. Regular training and a high level of security awareness ensure that information security is not just seen as a task for IT, but is practiced throughout the entire company. This holistic mindset was a cornerstone of our journey to achieving ISO 27001 certification.

Our automation strategies further illustrate how we combine efficiency with security. By standardizing processes, we reduce human error while laying the foundation for continuous improvement.

Added value for customers and partners

For our customers, certification means one thing above all: trust. ISO 27001 certification is an internationally recognized seal of quality and confirms that we adhere to the highest security standards. This not only enhances the reliability of our cloud products but also assures our customers that their data is in safe hands.

Our partners also benefit significantly from this certification. Standardized processes and clearly defined security requirements make collaboration more seamless, boost efficiency, and establish a foundation of trust for future projects. It is a crucial competitive advantage, especially in a dynamic environment like the cloud industry.

Our vision for the future

ISO 27001 certification is not an endpoint for us but a milestone in our ongoing journey to continuously enhance our security measures. For instance, we plan to make our monitoring systems even more robust, enabling us to detect potential risks more quickly and address them more effectively. The digital landscape is constantly changing – we are ready to face these challenges and ensure the security of our customers, partners, and their data.

Design decisions in minutes – how AI supports product development

Artificial intelligence (AI) is a hot topic and increasingly important in product development. But how can this technology be effectively integrated into development projects? Together with our client Audi, we put it to the test and examined the potential and challenges of a machine learning (ML) application – a subset of AI – in a real project. For this purpose, we chose a crash management system (CMS). It is both simple enough to achieve a meaningful result and complicated enough to adequately test the general applicability of the ML method.

Expertise as the Key

ML can only be effectively utilized to the extent the underlying data foundation allows. Therefore, the expertise of the professionals involved plays a critical role. For example, design engineers enter their knowledge of manufacturing and spatial constraints, usable materials, and dependencies into the CAD model. Simulation engineers share their expertise on the simulation process, while data scientists assist with sampling and evaluation.

The creation of thousands of design and corresponding simulation models, as required for ML, presents a tremendous challenge without automation. The FCM CAT.CAE-Bridge, a specially developed plug-in for CATIA, enables seamless automation across all process steps. Additionally, it embeds all simulation-relevant information (material, properties, solver, and more) directly into the CAD model. The fully automatic translation into a simulation file is done via tools such as ANSA or HyperMesh.

Automated process: Sampling, DoE, model creation, simulation, evaluation with subsequent training of the ML models. (© CONTACT Software)
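As a rough illustration of the sampling/DoE step, the following sketch draws a Latin hypercube sample over a small design space. The parameter names and ranges are invented for illustration – they are not the actual CMS design variables.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical CMS design parameters and allowed ranges (illustrative only).
bounds = {
    "wall_thickness_mm": (1.5, 4.0),
    "crash_box_length_mm": (150.0, 250.0),
    "rib_count": (2.0, 6.0),
}

def latin_hypercube(n_samples: int) -> np.ndarray:
    """Latin hypercube sampling: exactly one point per stratum of each
    parameter, with strata combined at random across parameters."""
    d = len(bounds)
    # One stratified point per interval [i/n, (i+1)/n) in each dimension.
    u = (np.arange(n_samples)[:, None] + rng.random((n_samples, d))) / n_samples
    for j in range(d):
        rng.shuffle(u[:, j])  # decorrelate the strata across parameters
    lo = np.array([b[0] for b in bounds.values()])
    hi = np.array([b[1] for b in bounds.values()])
    return lo + u * (hi - lo)

samples = latin_hypercube(1000)  # 1000 parameter sets for CAD/simulation runs
print(samples.shape)             # (1000, 3)
```

Each row would be fed through the automated CAD-to-simulation chain described above, yielding one (parameters, results) pair for training.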

Precise Linking of Parameters and Results

Our approach ensures that the relationship between the CAD model and the simulation model is fully preserved. The automated calculation and evaluation of the models based on specific results create an excellent data foundation for the ML process. The vectors of input parameters with corresponding result values form the basis for the ML approach – clear and comprehensive.

Input parameters (blue) identified based on constrained result vectors (red) that meet the requirements. (© CONTACT Software)

With the trained models and their known accuracy, parameter variations can be quickly tested and their impact on behavior derived – literally within minutes. Once the optimal parameters are identified, they are automatically transferred to the CAD model, and the design process can continue.
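How a trained surrogate enables design decisions in minutes can be sketched with synthetic data. Everything here is an assumption for illustration: the response function, the quadratic response-surface surrogate, and the feasibility limit stand in for the actual ML models and crash requirements used in the project.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the sampled (parameter, result) pairs: each row of X
# is one design's input vector, y a scalar result (e.g. a peak crash force).
X = rng.uniform(0.0, 1.0, size=(500, 3))
y = 3.0 * X[:, 0] + X[:, 1] ** 2 - 0.5 * X[:, 2] + rng.normal(0, 0.01, 500)

def fit_quadratic_surrogate(X, y):
    """Least-squares fit of a quadratic response surface - a classic, cheap
    surrogate; the project itself may use other ML model types."""
    feats = np.hstack([np.ones((len(X), 1)), X, X ** 2])
    coef, *_ = np.linalg.lstsq(feats, y, rcond=None)
    return lambda Xq: np.hstack([np.ones((len(Xq), 1)), Xq, Xq ** 2]) @ coef

predict = fit_quadratic_surrogate(X, y)

# "Design decisions in minutes": score thousands of parameter variations
# against a requirement instead of running one simulation per variant.
candidates = rng.uniform(0.0, 1.0, size=(10_000, 3))
feasible = candidates[predict(candidates) <= 1.0]  # hypothetical result limit
best = feasible[np.argmin(predict(feasible))]
print(best)
```

The expensive step (one simulation per design) happens once, up front, to produce the training data; afterwards every new variation costs only a prediction.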

Conclusion

Our project demonstrated that ML is a valid method for design engineering. The combination of parametric CAD models, simulation, and machine learning provides an efficient approach to making design decisions quickly and accurately. The prerequisite for this is a robust data foundation and the collaboration of the relevant experts on the model. The successful results from the Audi project demonstrate the potential of our data-based approach for product development.