Edge Device

Standardised data acquisition and data processing

The edge device opens the door for both you and us to data-driven bottling line production. It is of pivotal importance in particular when it comes to standardising data collection and processing across the entire supply chain.



Cloud computing

and its use

Over the last few years, cloud computing has become a cost-efficient alternative to internally operated data centres. The accompanying benefits are undisputed: lower hardware investment, automatic system updates and less IT maintenance. However, the quantities of data generated by production machines cannot be transferred directly to the cloud quickly enough and at the necessary density and resolution. This is where so-called edge devices come in: devices which collect and process the data with low latency, directly at the source where it is generated. This combination allows the possibilities of cloud computing to be exploited to the full. The consultancy and market research company Gartner confirmed this technology’s potential back in 2017 with the statement: “The Edge will eat the Cloud.”*
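As an illustration of this edge-side processing, the sketch below aggregates a high-frequency sensor stream into compact per-window summaries before anything leaves the device. The window size and summary fields are illustrative assumptions, not part of any specific product:

```python
from statistics import mean

def aggregate_readings(readings, window=10):
    """Reduce a high-frequency sensor stream to per-window aggregates.

    The raw values stay on the edge device; only the compact summary
    (min, max, mean per window) is forwarded to the cloud.
    """
    summaries = []
    for i in range(0, len(readings), window):
        chunk = readings[i:i + window]
        summaries.append({
            "min": min(chunk),
            "max": max(chunk),
            "mean": mean(chunk),
        })
    return summaries

# 100 raw samples shrink to 10 cloud-bound records
raw = [float(i) for i in range(100)]
print(len(aggregate_readings(raw)))  # prints 10
```

In this way the device keeps the full-resolution data locally for low-latency use while the cloud receives only the bandwidth-friendly summary.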


The Edge Device

and its requirements

An edge device provides local intelligence which can collect data from on-premise systems and then process it. Because the edge device is connected to the cloud, a permanently maintained internet connection is a prerequisite for operating these systems.

This results in the following key requirements:

  • Secure connection to the cloud infrastructure
  • Protection against local attackers
  • High demands on IT security in the product and process
  • Ability to update the system
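A minimal sketch of the first requirement, a secure connection to the cloud infrastructure, using Python's standard `ssl` module. The CA bundle path and endpoint details are deployment-specific assumptions:

```python
import ssl

def make_cloud_context(ca_file=None):
    """Build a TLS context for the device-to-cloud link.

    Certificate verification and hostname checking are enforced so the
    device only talks to the genuine cloud endpoint. In a real
    deployment, ca_file would point to the operator's CA bundle
    (hypothetical); None falls back to the system trust store.
    """
    context = ssl.create_default_context(cafile=ca_file)
    context.minimum_version = ssl.TLSVersion.TLSv1_2  # reject legacy protocols
    context.check_hostname = True
    context.verify_mode = ssl.CERT_REQUIRED
    return context

ctx = make_cloud_context()
print(ctx.verify_mode == ssl.CERT_REQUIRED)  # prints True
```

Enforcing certificate verification on the device side also addresses the second requirement in part, since a local attacker cannot silently redirect the data stream to a forged endpoint.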


Artificial intelligence

as the basis for predictive approaches

Artificial intelligence (AI) is needed on multiple levels to realise the required predictive use cases. The greatest challenge when operating an AI pipeline is the transition from a proof of concept (PoC) to operative production. It is crucial that the entire data processing chain is standardised: everything from data acquisition on the edge device and the data format through to data storage in the data warehouse.
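One way to picture such a standardised chain is a single record schema shared by every stage, from the edge device through the transport format to the data warehouse. The field names below are illustrative assumptions, not a published standard:

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class SensorRecord:
    """One fixed schema used along the whole chain: edge acquisition,
    transport format, and data-warehouse storage."""
    machine_id: str
    sensor: str
    timestamp: float   # seconds since epoch, UTC
    value: float
    unit: str

# A record produced on the edge device serialises to the same
# structure the warehouse ingests, so models always see identical input.
record = SensorRecord("filler-01", "motor_temp", 1700000000.0, 63.2, "degC")
payload = json.dumps(asdict(record), sort_keys=True)
print(payload)
```

Because every stage reads and writes the same structure, a model trained on warehouse data can be deployed to the edge device without any schema translation in between.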
In machine learning, too, AI models must continually be fed with identically structured data if the required quality is to be ensured. Because these models are executed on the edge device, in order to keep latency low and reduce data transfer to the cloud, newly trained models must be seamlessly transferred to and updated on the edge device. Only when this processing chain is closed can the models be used in productive mode.
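The model-update step that closes this chain can be sketched as a simple version check on the edge device. The two fetch functions are stand-ins for whatever cloud API a real deployment provides:

```python
def maybe_update_model(current_version, fetch_remote_version, fetch_model):
    """Pull a newly trained model onto the edge device when the cloud
    publishes a higher version; otherwise keep running the local model.

    fetch_remote_version and fetch_model are hypothetical callables
    standing in for the deployment-specific cloud API.
    """
    remote = fetch_remote_version()
    if remote > current_version:
        model = fetch_model(remote)   # download the retrained model
        return remote, model          # swap in the new model
    return current_version, None      # keep the existing one

# Simulated check: the cloud has published version 4, the device runs 3.
version, model = maybe_update_model(
    current_version=3,
    fetch_remote_version=lambda: 4,
    fetch_model=lambda v: f"model-v{v}",
)
print(version, model)  # prints: 4 model-v4
```

The swap should be atomic in practice (download to a temporary location, then replace), so that the device never runs a half-transferred model.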