Digital authenticity: how to spot AI-generated content

In today’s digital age, we often have to ask whether we can trust an image, a video, or a piece of text. Tracing the source of information is becoming increasingly difficult. Generative AI accelerates the creation of content at an incredible pace: images and audio files that once required a skilled artist can now be generated by AI models in a matter of seconds. Models like OpenAI’s Sora can even produce high-quality videos!

This technology offers both opportunities and risks. On the one hand, it speeds up creative processes, but on the other hand, it can be misused for malicious purposes, such as phishing attacks or creating deceptively real deepfake videos. So how can we ensure that the content shared online is genuine?

Digital watermarks: invisible protection for content

Digital watermarks are one solution for verifying the origin of images, videos, or audio files. These patterns are invisible to the human eye, yet algorithms can detect them even after minor changes such as compression or cropping, and they are difficult to remove. They are primarily used to protect copyright.
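
To make the idea concrete, here is a deliberately simplified sketch of embedding and reading back a bit pattern in the least significant bits of an image, assuming Python with numpy. Real watermarking schemes spread the signal across the image (for example in the frequency domain) so that it survives compression and cropping; this toy version only illustrates the embed-and-detect principle.

```python
import numpy as np

def embed_watermark(image: np.ndarray, bits: np.ndarray) -> np.ndarray:
    """Write a bit pattern into the least significant bits of the first pixels."""
    flat = image.astype(np.uint8).flatten()          # flatten() returns a copy
    flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits
    return flat.reshape(image.shape)

def extract_watermark(image: np.ndarray, length: int) -> np.ndarray:
    """Read the bit pattern back from the least significant bits."""
    return image.astype(np.uint8).flatten()[:length] & 1

# Toy example: a random grayscale "image" and a 32-bit watermark
rng = np.random.default_rng(42)
image = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
watermark = rng.integers(0, 2, size=32, dtype=np.uint8)

marked = embed_watermark(image, watermark)
assert np.array_equal(extract_watermark(marked, watermark.size), watermark)
```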

Applying watermarks to text, however, is far more difficult because text has less redundancy than the pixels of an image. A related method is to insert small but visible errors into the original content. Google Maps, for instance, uses this approach with fictional streets: if these streets appear in a copy, it signals copyright infringement.

Digital signatures: security through cryptography

Digital signatures are based on asymmetric cryptography. This means that the content is signed with a private key that only the creator possesses. Others can verify the authenticity of the content using a public key. Even the smallest alteration to the content invalidates the signature, making it nearly impossible to forge. Digital signatures already ensure transparency in online communication, for example through the HTTPS protocol for secure browsing.
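
As a minimal illustration (not the C2PA mechanism itself), the following Python sketch uses the cryptography package to sign a piece of content with a private key and verify it with the corresponding public key; changing even a single byte makes verification fail.

```python
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# The creator signs the content with a private key only they possess.
private_key = Ed25519PrivateKey.generate()
content = b"original photo bytes ..."
signature = private_key.sign(content)

# Anyone can check authenticity with the matching public key.
public_key = private_key.public_key()
public_key.verify(signature, content)  # passes silently: content is unchanged

# Even the smallest alteration invalidates the signature.
try:
    public_key.verify(signature, content + b"!")
except InvalidSignature:
    print("Tampered content rejected")
```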

If all digital content were protected by signatures, the origin and authenticity of any piece of media could be verified easily. For example, you could confirm who took a photo, and when and where it was taken. One initiative pushing this forward is the Coalition for Content Provenance and Authenticity (C2PA), which is developing technical standards for applying digital signatures to media and documenting their origin. Unlike watermarks, signatures are not permanently embedded in the content itself and can be removed without altering the material. In an ideal scenario, everyone would use digital signatures; a missing signature would then raise doubts about the trustworthiness of the content.

GenAI detectors: AI vs. AI

GenAI detectors provide another way to recognize generated content. Generative AI models leave behind characteristic patterns, such as specific wording or sentence structures, which other AI models can detect. Tools like GPTZero can already identify with high accuracy whether a text originates from a generative AI model like ChatGPT or Gemini. While these detectors are not perfect yet, they provide an initial indication.
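
As an illustration of the underlying idea, the sketch below computes a text’s perplexity under a small language model using the transformers and torch packages; low perplexity (very predictable text) is one signal that detectors such as GPTZero reportedly combine with others. The threshold used here is an assumption for demonstration only, not a reliable classifier.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """How 'surprised' the model is by the text (lower = more predictable)."""
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        loss = model(**inputs, labels=inputs["input_ids"]).loss
    return float(torch.exp(loss))

sample = "Artificial intelligence is transforming the way we create content."
score = perplexity(sample)
print(f"Perplexity: {score:.1f}", "(suspiciously predictable)" if score < 20 else "")
```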

What does this mean for users?

Of all the options, digital signatures offer the strongest protection because they work across all types of content and are based on cryptographic methods. It will be interesting to see whether projects like C2PA can establish trusted standards. Still, depending on the purpose, different measures may be needed to ensure the trustworthiness of digital content.
In addition to technological solutions, critical thinking remains one of the best tools for navigating the information age. The amount of available information is constantly growing; it is therefore important to question and verify content critically and to be aware of what generative AI models are capable of.

For a more comprehensive article, check out the CONTACT Research Blog.

Data migration to Cloud PLM systems

Challenges and best practices for successful data migration 

More and more companies are adopting cloud-based PLM systems to streamline their product development processes. Whether they are already using an on-premises PLM system and want to switch to a cloud solution or are implementing a Cloud PLM system for the first time, one of the biggest challenges is migrating data smoothly and securely.
How can this data be reliably transferred to the new system? In this blog post, we examine the challenges and best practices for successful data migration to Cloud PLM systems and offer tips on ensuring a smooth transition without data loss.

What challenges arise during data migration to Cloud PLM systems?

When migrating data to Cloud PLM systems, several obstacles can complicate and delay the entire process:

  1. Data quality and consistency
    Legacy data is often incomplete or inconsistent. Missing attributes, invalid values, or duplicate records can hinder the migration process. Particularly with CAD models, missing files or broken references may prevent models from being imported completely (see the sketch after this list).
  2. Data scope and complexity
    Depending on the scope and complexity of the data being transferred, the migration process can be very time-consuming. Large datasets, such as entire version histories of CAD data or multi-level BOMs, require significant computing resources and can slow down the migration.
  3. Structural differences between systems
    Data structures in the new Cloud PLM system may differ from those in your legacy system. Attributes, data fields, or relationships between records may be organized differently, requiring data transformation or restructuring before import.
  4. Technical challenges
    Migrating data to a Cloud system brings specific technical issues. For example, along with ensuring file format compatibility, sufficient network bandwidth and data transfer rates must be guaranteed.
  5. Security and compliance requirements
    Strict security and compliance regulations must be followed when transferring sensitive data to the Cloud. Data must be encrypted during transport and storage, and data protection laws such as GDPR must be adhered to.
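
To illustrate the data quality point from the list above, here is a small, generic Python sketch using pandas and made-up part records (it is not CIM Database Cloud functionality) that flags incomplete and duplicate records before an import.

```python
import pandas as pd

# Made-up legacy export: one record lacks a material, one part number is duplicated.
legacy_parts = pd.DataFrame([
    {"part_no": "100-001", "name": "Bracket", "material": "Steel"},
    {"part_no": "100-002", "name": "Housing", "material": None},
    {"part_no": "100-001", "name": "Bracket", "material": "Steel"},
])

required = ["part_no", "name", "material"]

# Records missing required attributes
incomplete = legacy_parts[legacy_parts[required].isna().any(axis=1)]

# Records sharing a part number
duplicates = legacy_parts[legacy_parts.duplicated(subset="part_no", keep=False)]

print(f"{len(incomplete)} incomplete record(s)")
print(f"{len(duplicates)} duplicate record(s)")
```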

What key questions should you address before data migration?

Data migration is often underestimated, although it is one of the most critical tasks before a new PLM system goes live. You should address several key questions early to import your legacy data successfully.

First, determine which data objects will be transferred to the new system: Are you migrating CAD assemblies, parts and BOMs, office documents, or projects? It’s also essential to define the scope of the data: Do you want to migrate data from a specific project, a product, a specific company location, or the entire data archive?

You should also decide how much historical data you want to migrate. Do you want to transfer only the latest version or all versions, including the complete audit trail and engineering changes? These aspects are crucial as they influence the scope and complexity of the migration.

You should also carefully examine the content of the data itself. Consider whether all attribute values and CAD parameters are needed or if it’s sufficient to import only some of them. This is important to define which data should be stored in which objects and attributes in the target PLM system.

What makes data transfer with CIM Database Cloud so simple?

  1. User-friendly import tools
    The cloud-based PLM system CIM Database Cloud offers powerful, easy-to-use import tools specifically designed to simplify the migration process. They allow you to quickly and efficiently import configuration data such as field selection values (e.g., dropdown fields) as well as PLM data such as CAD documents, parts, BOMs, office documents, projects, and requirement specifications.
  2. Support for various file formats
    CIM Database Cloud supports a wide range of file formats and data sources, making it easy to import different data objects. These include Excel files, CAD formats, and the ReqIF format for requirement specifications.
  3. Automated validation processes
    CIM Database Cloud includes built-in validation mechanisms that help identify and correct potential errors during the import process. These functions automatically check whether the data is complete and consistent during import, contributing to high data quality.
  4. Iterative Migration Approach
    The platform supports an iterative migration approach, allowing you to import and test data step by step. This helps identify and resolve potential issues early on, without affecting the migration process. This approach reduces the risk of errors and accelerates data migration.
  5. Comprehensive Documentation and Support
    Alongside the migration process, CIM Database Cloud offers extensive documentation and tutorials. These contain clear instructions and examples on how to import and configure different data types. Additionally, customer success managers are available to assist if needed.

Conclusion

Data migration to cloud-based PLM systems is often fraught with challenges. Successful data migration therefore requires careful planning, considering aspects such as data quality, scope, structural differences, and security requirements.
CIM Database Cloud enables you to efficiently migrate your PLM data and make your product development processes future-proof. With user-friendly import tools, support for various data formats, automated validation processes, and comprehensive documentation, companies can ensure the seamless and secure integration of their existing data. An iterative migration approach, combined with thorough preparation, minimizes risks and ensures a smooth transition to the new system.

Scope 3 emissions: A challenge for companies

Reducing greenhouse gas (GHG) emissions is crucial in the fight against climate change. Many companies face the challenge that indirect emissions in their value chain, so-called Scope 3 emissions, are often the largest contributors. Since these emissions fall outside the direct control of the company, they are usually the most difficult to determine (and optimize). How can companies address these central challenges within their value chains?

What are Scope 1, 2, and 3 emissions?

The Greenhouse Gas (GHG) Protocol classifies emissions into three categories: Scope 1 for direct emissions from company-owned sources, Scope 2 for indirect emissions from purchased energy, and Scope 3 for all other indirect emissions, including those from upstream and downstream processes within the value chain. Scope 3 is particularly important because it often accounts for the majority of GHG emissions. The GHG Protocol defines 15 categories of Scope 3 emissions that arise from both upstream and downstream activities. These include raw material extraction, production and transportation of purchased components, and the use of the manufactured products by end consumers. These emissions are difficult to capture as they are not directly under the company’s control.

Corporate Carbon Footprint (CCF) vs. Product Carbon Footprint (PCF)

There are two central approaches to calculating emissions: the Corporate Carbon Footprint (CCF), which encompasses all activities of a company, and the Product Carbon Footprint (PCF), which focuses on the lifecycle of a specific product. The PCF is particularly important when it comes to determining emissions along the value chain. Companies that aim to measure their Scope 3 emissions also need data from their suppliers regarding the PCF of the components they purchase.
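
As a simplified, hypothetical example of why supplier PCF data matters: the upstream contribution of purchased components to a product’s footprint can be approximated by summing quantity times PCF over the bill of materials. The component names and values below are invented purely for illustration.

```python
# Hypothetical bill of materials with supplier-provided PCF values (kg CO2e per unit)
bom = [
    {"component": "Housing",   "quantity": 1, "pcf_kg_co2e": 12.4},
    {"component": "PCB",       "quantity": 2, "pcf_kg_co2e": 8.7},
    {"component": "Fasteners", "quantity": 8, "pcf_kg_co2e": 0.05},
]

# Upstream (Scope 3, purchased goods) contribution to the product carbon footprint
upstream_pcf = sum(item["quantity"] * item["pcf_kg_co2e"] for item in bom)
print(f"Purchased components: {upstream_pcf:.2f} kg CO2e per product")  # 30.20
```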

Why is measuring Scope 3 emissions important?

Companies can directly influence, and therefore more easily calculate, Scope 1 and Scope 2 emissions. However, Scope 3 emissions should not be overlooked when assessing the entire value chain: since upstream and downstream processes are often the largest sources of GHGs, capturing them is the only way to identify and reduce “hotspots” within the value chain.

For many SMEs, significant emissions lie in the upstream processes. This is also particularly relevant for industries that rely on complex, globally distributed supply chains. The automotive industry, for instance, depends heavily on purchased components and services, which significantly affect the GHG balance. According to the study “Climate-Friendly Production in the Automotive Industry” by the Öko-Institut e.V., an average of 74.8% of emissions occur during the usage phase, while in-house production (Scope 1 and 2) accounts for only about 1.9%, and 18.6% originate from the upstream value chain with purchased components. As the industry focuses increasingly on e-mobility, the Scope 3 emissions of purchased components – and thus those from suppliers – come into sharper focus as a key lever.

Challenges in the supply chain

The pressure on suppliers to make their production more efficient and sustainable is growing, along with the need for transparency regarding the emissions of the supplied parts. Key challenges in the supply chain include data quality and availability. To tackle this and reduce greenhouse gas emissions, companies need to break new ground, from material selection to production methods. A solid data foundation supports these decisions, as well as the accurate documentation of emissions.
Capturing Scope 1 and Scope 2 emissions is already mandatory under the GHG Protocol Corporate Standard, while Scope 3 reporting is currently optional. However, the importance of Scope 3 reporting is increasing, as demonstrated by EU regulations like the Corporate Sustainability Reporting Directive (CSRD) and the associated European Standards (ESRS). These regulations emphasize the disclosure of emissions as a central aspect of climate action and sustainable business practices.

Three key steps to reduce Scope 3 emissions

  1. Optimize data management: Companies should collect comprehensive data on their products and their lifecycles to make design and portfolio decisions in favor of sustainability.
  2. Ensure data sovereignty and trust: Accurate calculation of Scope 3 emissions requires control over data, particularly in the context of the upstream and downstream value chains.
  3. Use open interfaces: Open data interfaces are essential for seamless integration and communication within the value chain. Approaches like the Asset Administration Shell (AAS) and concepts such as the Digital Product Passport (DPP) can provide valuable support.

Conclusion

Measuring and optimizing Scope 3 emissions is one of the greatest challenges for companies seeking to improve their GHG balance. By leveraging better data, optimizing collaboration within the supply chain, and ensuring transparent reporting, companies can meet regulatory requirements and make progress toward a more sustainable future.

Read a more detailed article on Scope 3 emissions on the CONTACT Research Blog.