IoT failures and trust in technology

At the beginning of April this year I attended the building IoT conference in Cologne. At the conference, organized by heise developer, iX and the d.punkt publishing house, the lectures and the exhibition revolved around applications for the Internet of Things (IoT) and Industry 4.0. Together with my colleague Yang Zhong, I gave a talk on modern user experience (UX) concepts for IoT solutions.

At the end of our presentation, which traced a user’s work processes from the data acquisition of a real “Thing” to the visualization of live data in a dashboard via its Digital Twin, there was a very stimulating discussion. Two points were particularly interesting:

  • In many application areas, the topic of customer journeys is high on the agenda – which confirms the current trend.
  • Software must be developed for its users – a point on which there was also consensus.

The evening was dedicated to Industrial IoT. As moderator, I hosted a discussion with representatives from various enterprises and software companies, such as Miele, Dürr Dental, Codecentric and akquinet. It turned into an intensive discussion of the predominant topics of Industry 4.0. Besides the choice of control electronics or the wireless standard, these include the question of whether an IoT solution should be operated in the cloud. The arguments for cloud solutions are, of course, convenience and the comparatively efficient, simple scalability with regard to the number of “things” to be managed. Running the software on your own servers (on-premises), on the other hand, ensures that confidential product or customer data really does not leave your premises. The discussion confirmed my assessment that both approaches have their advantages in practice and are applied accordingly.

One of my personal highlights at this year’s building IoT was a negative hit list of IoT products, so-called IoT failures: products with massive security gaps, such as open data interfaces. Some “classic” vulnerabilities were already known, such as unchanged default passwords that allow data misuse. Other gaps really surprised me: for example, a smoke detector from a well-known brand that comes equipped with a microphone (?!) as standard, which in turn allows unwanted eavesdropping in living rooms.

Why is there a microphone in a smoke detector? We cannot say for sure, but it is certainly not in the customer’s interest, and it causes a massive loss of trust in technology. And that is precisely the point: acceptance of new technologies requires trust. And this is becoming more and more important with increasing digitalization.

20 years of PLM: Why do many still doubt the benefits?

By now I can look back on many years of consulting in Product Lifecycle Management (PLM) – a topic whose popularity has fluctuated considerably over the years and which is currently on the rise again in the wake of digital transformation.

Despite the renewed attention for PLM, I notice that the term still carries the aftertaste of being large, cumbersome, tedious and uneconomical. Astonishing, because the effort that many companies put into ERP projects, for example, was and is significantly higher in most cases. Yet the necessity and benefits of – expensive – ERP projects are discussed, but rarely questioned, see Haribo and Lidl.

How do these different perceptions come about? One explanation could be that the benefits of PLM for management and employees were never made sufficiently tangible over the years. This was mainly because the scope and visibility of PLM projects within companies were often very limited.

A closer look shows that many of the earlier PLM implementations were in fact PDM implementations. PDM, Product Data Management, focuses on product-describing data, primarily CAD models and drawings. “PLM” was therefore limited to the core areas of product development, very often even to mechanical design alone. Although available in some PLM solutions for years, Change Management, Document Management, Project Management, cross-departmental collaboration and communication with external parties were simply not used. Instead, homegrown solutions based on Excel, Outlook, the file system or SharePoint were often created – tools that everyone in the company knows, and for which it is easy to find someone to “optimize” them with macro programming. On top of that, the negative attitude towards PLM was certainly fuelled by the overloaded, highly compressed “engineering user interfaces” of the 1st and 2nd PLM product generations.

So it is no surprise that PLM was seen within companies as an expensive, exotic application of little use!

In the current PLM renaissance, companies now have every opportunity to learn from the deficits of the past and to tap the impressive potential of Product Lifecycle Management. Many obsolete and discontinued PDM and PLM solutions are currently being, or will soon be, replaced by modern 3rd-generation PLM platforms, which also support the use cases around the Digital Twin and the Internet of Things. These platforms breathe life into the PLM idea by supporting processes effectively and efficiently across phases, departments and company boundaries. New, web-based HTML5 user interfaces significantly increase acceptance among all user groups in the company by making even complex relationships clearer and their handling more efficient.

Now there is a chance to realize “real” Product Lifecycle Management! Against the background of new, digital business models, which put the use phase of products much more in the foreground, this becomes all the more important. PLM solutions play a central role here, as they lay the data foundation for the Digital Twin.

But in the end, hard facts also count when it comes to benefits and ROI: if PLM is actually used company-wide with all its capabilities, substantial economies of scale quickly result from the significant reduction of non-value-adding activities. This alone often enables a return on investment after just one year – quite apart from the additional revenue potential of new, data-driven business models that PLM will enable in the future.

Are data science platforms a good idea?

In the spirit of Karl Valentin: platforms are beautiful and take a lot of work off your neck. The idea of platforms for automatic data analysis therefore comes at just the right time. Fittingly, Gartner has now published a “Magic Quadrant for Data Science and Machine Learning Platforms”. The document itself sits behind a paywall, but some of the vendors mentioned in the report offer access to it in exchange for your contact details.

Gartner particularly emphasizes that such a platform should provide everything you need from a single source, unlike various individual components that are not directly coordinated with each other.

Sounds good to me! However, data science is not a field where you can magically get ahead with a tool or even a platform. The development of solutions – for example, for the predictive maintenance of a company’s machines – goes through various phases, with cleaning/wrangling and preprocessing accounting for most of the work. This is where ETL (Extract, Transform, Load) and visualization tools such as Tableau come in. And beyond the comfort zone that managers imagine platforms provide, database queries and scripts for transformation and aggregation in Python or R are simply the means of choice. A look at data science online tutorials from top providers such as Coursera or Udemy underlines the importance of these – well – down-to-earth tools. “Statistical analysis, Python programming with NumPy, pandas, matplotlib, and Seaborn, Advanced statistical analysis, Tableau, machine learning with stats models and scikit-learn, deep learning with TensorFlow” is how one of Udemy’s course programs reads.
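To make this concrete: here is a minimal sketch of what such a transformation and aggregation script might look like in pandas, assuming a hypothetical CSV export of machine sensor readings (the file name and column names are purely illustrative):

```python
# Minimal sketch of a typical cleaning/aggregation step with pandas.
# The file name and column names (machine_id, timestamp, temperature)
# are illustrative assumptions, not taken from a specific project.
import pandas as pd

# Load the raw export and parse timestamps
df = pd.read_csv("sensor_readings.csv", parse_dates=["timestamp"])

# Basic cleaning: drop duplicates and rows without a machine ID,
# fill short gaps in the temperature signal
df = df.drop_duplicates()
df = df.dropna(subset=["machine_id"])
df["temperature"] = df["temperature"].interpolate(limit=3)

# Aggregate to hourly averages per machine - a common preprocessing
# step long before any modelling is attempted
hourly = (
    df.set_index("timestamp")
      .groupby("machine_id")["temperature"]
      .resample("1h")
      .mean()
      .reset_index()
)

print(hourly.head())
```

Unspectacular as it looks, this is the kind of script where most of the actual data science effort goes.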

On top of that, projects often get stuck in this preliminary stage or are cancelled altogether. There are many reasons for this:

  • no analytical/statistical approach can be found
  • the original idea proves to be unfeasible
  • the data is not available in the quantity or quality you need
  • simple analyses and visualizations are enough and everything else would be “oversized”.

That is no big deal; it simply means that the automated use of Machine Learning and AI does not turn every data set into a data treasure. If, however, a productive benefit does become apparent, you have to prepare for the production pipeline and its time and resource constraints. Usually you then start from scratch and rebuild everything, e.g. in TensorFlow for neural networks or in custom libraries.
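As a rough illustration of what such a rebuild can look like, here is a minimal TensorFlow/Keras sketch; the feature count, layer sizes and placeholder training data are hypothetical and merely stand in for the results of the preceding preprocessing:

```python
# Minimal sketch of rebuilding an exploratory model in TensorFlow/Keras for a
# production pipeline. Feature count, layer sizes and the training data are
# hypothetical placeholders.
import numpy as np
import tensorflow as tf

# Placeholder data standing in for the cleaned, aggregated feature table
X_train = np.random.rand(1000, 12).astype("float32")
y_train = np.random.randint(0, 2, size=(1000,)).astype("float32")

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(12,)),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # e.g. failure / no failure
])

model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X_train, y_train, epochs=5, batch_size=32, verbose=0)

# Export the trained model in a format the production pipeline can load and serve
model.save("predictive_maintenance_model.keras")
```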

The misunderstanding lies in assuming that a) data science can be carried seamlessly all the way into productive use and b) a one-stop shop for data science (here: a “platform”) is needed that does everything in one go. That will never happen.

This is really good news, because it means that organizations can achieve their first goals without having to resort to large platforms. A reasonably careful selection of suitable tools (many of them open source) is enough to get there.

Also interesting:
In my video “AI Needs Strategy” I explain which steps companies can take to use AI technology successfully.