In order to digitally compensate for image errors, the impaired areas of each incoming image are first identified algorithmically and marked accordingly. To this end, neural networks have been trained to learn the relationship between image sections with and without errors. In the next step, the neural network reconstructs the faulty sections, which are then replaced in the overall image by these reconstructions.
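The detect-mask-reconstruct pipeline can be illustrated with a minimal sketch. The real system uses trained neural networks for both steps; here a hypothetical rule (pixels carrying an assumed error value) stands in for the detector, and a neighbour average stands in for the learned reconstructor:

```python
def detect_errors(image, bad_value=0):
    """Mark pixels carrying the (assumed) error value. 1 = valid, 0 = faulty."""
    return [[0 if px == bad_value else 1 for px in row] for row in image]

def reconstruct(image, mask):
    """Replace faulty pixels with the mean of their valid 4-neighbours."""
    h, w = len(image), len(image[0])
    out = [row[:] for row in image]
    for y in range(h):
        for x in range(w):
            if mask[y][x]:
                continue  # pixel is valid, keep it
            vals = [image[ny][nx]
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1))
                    if 0 <= ny < h and 0 <= nx < w and mask[ny][nx]]
            if vals:
                out[y][x] = sum(vals) / len(vals)
    return out

frame = [[9, 9, 9],
         [9, 0, 9],   # centre pixel is faulty
         [9, 9, 9]]
mask = detect_errors(frame)
repaired = reconstruct(frame, mask)   # centre pixel becomes 9.0
```

The two stages mirror the description above: identify and mark impaired areas, then replace them with reconstructions.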

From a variety of network architectures, the EDAG Group developers selected the most suitable ones, achieving the best results with a combination of a Partial Convolutional Neural Network (CNN) and a Recurrent Convolutional Neural Network (RCNN). The use of an RCNN inspired by long short-term memory (LSTM) enables the software to draw on information from previous and subsequent images for reconstruction. This deep-learning approach permits the robust reconstruction of image errors, which can take many different forms.
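The building block of a partial CNN is the partial convolution: the convolution is computed only over valid pixels, the response is re-weighted by the fraction of valid inputs, and the validity mask shrinks with each layer. The following is a hedged, single-channel sketch with a 3x3 kernel (shapes and names are illustrative, not the EDAG implementation):

```python
def partial_conv(image, mask, kernel, bias=0.0):
    """One partial-convolution step: convolve over valid pixels only."""
    h, w = len(image), len(image[0])
    k = len(kernel)
    pad = k // 2
    win_size = k * k
    out = [[0.0] * w for _ in range(h)]
    new_mask = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            acc, valid = 0.0, 0
            for dy in range(k):
                for dx in range(k):
                    ny, nx = y + dy - pad, x + dx - pad
                    if 0 <= ny < h and 0 <= nx < w and mask[ny][nx]:
                        acc += kernel[dy][dx] * image[ny][nx]
                        valid += 1
            if valid:
                # Re-weight so holes do not darken the response.
                out[y][x] = acc * win_size / valid + bias
                new_mask[y][x] = 1  # the hole shrinks wherever valid input was seen
    return out, new_mask

image = [[1.0] * 3 for _ in range(3)]
mask = [[1] * 3 for _ in range(3)]
mask[1][1] = 0                              # a hole in the centre
kernel = [[1 / 9] * 3 for _ in range(3)]    # simple box-filter weights
filled, updated_mask = partial_conv(image, mask, kernel)
```

Stacking such layers progressively closes holes, which is what makes the architecture well suited to reconstructing arbitrarily shaped image errors.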

The selected network architecture also makes it possible to abstract information from previously seen objects and scenarios, and to recognise underlying relationships. For example, occluded objects in a single image can be reconstructed on the basis of empirical values gathered from previous images.
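The idea of filling a hidden region from previous images can be sketched very simply: for each pixel, carry forward the most recently observed valid value. In the actual system this role is played by the learned recurrent state, not an explicit buffer; the code below is only an assumption-laden illustration:

```python
def temporal_fill(frames, masks):
    """frames/masks: lists of 2D grids; mask 1 = valid, 0 = occluded."""
    h, w = len(frames[0]), len(frames[0][0])
    memory = [[None] * w for _ in range(h)]   # last valid value per pixel
    filled_frames = []
    for frame, mask in zip(frames, masks):
        out = [row[:] for row in frame]
        for y in range(h):
            for x in range(w):
                if mask[y][x]:
                    memory[y][x] = frame[y][x]   # remember the observation
                elif memory[y][x] is not None:
                    out[y][x] = memory[y][x]     # fill from the past
        filled_frames.append(out)
    return filled_frames

frames = [[[5, 5]], [[0, 5]]]   # second frame: left pixel occluded
masks = [[[1, 1]], [[0, 1]]]
result = temporal_fill(frames, masks)   # occluded pixel restored from frame 1
```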

These empirical values are collected during training by analysing millions of different images; the correctness of the abstractions is continuously checked throughout the training process.
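One common way such a correctness check works in image-reconstruction training is to compare the network output against the unimpaired ground-truth image, weighting the reconstructed (hole) pixels more strongly than the valid ones. The sketch below follows that pattern; the weight of 6.0 is a hypothetical choice, not a published EDAG value:

```python
def masked_l1_loss(pred, target, mask, hole_weight=6.0):
    """Mean absolute error, with reconstructed pixels weighted more heavily."""
    total, count = 0.0, 0
    for p_row, t_row, m_row in zip(pred, target, mask):
        for p, t, m in zip(p_row, t_row, m_row):
            weight = 1.0 if m else hole_weight   # m == 0 marks a reconstructed pixel
            total += weight * abs(p - t)
            count += 1
    return total / count

pred = [[1.0, 2.0]]
target = [[1.0, 4.0]]
mask = [[1, 0]]          # right pixel was reconstructed
loss = masked_l1_loss(pred, target, mask)   # (0 + 6.0 * 2) / 2 = 6.0
```

During training, this scalar drives the weight updates, so reconstruction errors in the filled regions are penalised most.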

The EDAG Group has optimised the software for Nvidia's widely used platforms for autonomous driving. As a result, users benefit from high-performance algorithms combined with a standard hardware platform for embedded systems.

The DiFoRem system increases the availability and robustness of camera-based signals, thereby improving the quality of the input data for current driver assistance systems and automated driving functions.

DiFoRem is compatible with various hardware components, such as rear-view, front-view, top-view or surround-view camera systems.