Conventional image processing deals almost exclusively with single-channel (grey-scale) or three-channel (RGB) data. In many fields of technology, however, ranging from non-destructive testing and modern communication and camera technology to new imaging methods in the pharmaceutical, healthcare and food industries, requirements have changed fundamentally in recent years: next-generation devices and systems demand the analysis of data sets with a large number of colour or spectral channels. The number of channels can reach several hundred or even several hundred thousand; hyperspectral imaging thus produces a data cube with two spatial coordinates and at least one spectral coordinate of equal standing.
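The following minimal sketch (in Python with NumPy; the dimensions are purely illustrative assumptions, not from the project) shows how such a data cube is typically laid out and how its access patterns differ from those of an RGB image.

```python
import numpy as np

# Hypothetical cube: two spatial axes plus one spectral axis.
height, width, n_channels = 100, 200, 10_000
cube = np.zeros((height, width, n_channels), dtype=np.float32)

# Access patterns that have no analogue in grey-scale/RGB processing:
pixel_spectrum = cube[40, 75, :]   # full spectrum at one spatial location
band_image     = cube[:, :, 512]   # one spectral channel viewed as a grey-scale image
```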
In imaging mass spectrometry, for example, spectral analyses of the chemical composition of a biological sample are performed at each "pixel" of the image. If a measurement resolution of 100 x 200 pixels is chosen for the underlying sample, then, simply put, 20,000 individual spectra must be recorded. For each of these pixels, 10,000 to 25,000 measured values are then available, each describing the chemical composition at that location. Taking the spatial context into account, imaging mass spectrometry data can therefore be interpreted as hyperspectral image data that encode the chemical composition of the measured sample in tens of thousands of image channels.
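A back-of-the-envelope calculation based on the figures above illustrates the resulting data volume (the 4-byte storage format is an assumption for illustration only):

```python
n_pixels = 100 * 200                       # 20,000 spectra, as stated above
values_per_pixel = 25_000                  # upper end of the stated range
total_values = n_pixels * values_per_pixel
print(total_values)                        # 500,000,000 measured values
print(total_values * 4 / 1e9, "GB")        # ~2 GB if each value is a 32-bit float
```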
The current challenges of hyperspectral imaging are fundamentally different from those of classical imaging and image processing and cannot be solved without taking into account the technical process of data acquisition and the concrete application problem. The decisive task is the extraction, processing and presentation of the relevant information (including segmentation, classification and quantification) from the available data. When implementing such methods on increasingly high-dimensional data for time-critical or real-time applications, questions of approximation and algorithmic efficiency also become crucial. The complexity of the data calls for new concepts in several respects: conventional, pixel-oriented methods have reached their limits here and are inefficient for realistic data volumes. Of central importance are approaches that combine structural information across all coordinates and account for different noise models for data errors in the spatial and spectral coordinates, as sketched below.
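The contrast between purely pixel-oriented processing and processing that also exploits spatial structure can be sketched as follows. This is a toy denoising example under the assumption that SciPy is available; it stands in for the general idea, not for any specific method of the project.

```python
import numpy as np
from scipy.ndimage import uniform_filter1d, gaussian_filter

rng = np.random.default_rng(0)
cube = rng.normal(size=(64, 64, 128)).astype(np.float32)  # toy noisy hyperspectral cube

# 1) Pixel-oriented: each spectrum is processed independently
#    (smoothing along the spectral axis only).
pixelwise = uniform_filter1d(cube, size=5, axis=2)

# 2) Spatial-spectral: additionally exploit neighbourhood structure in the two
#    spatial coordinates (Gaussian filter per band), so that spatial and
#    spectral information enter the result jointly.
spatial_spectral = gaussian_filter(pixelwise, sigma=(1.5, 1.5, 0.0))
```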
In HYPERMATH, central questions of hyperspectral imaging were addressed, making important contributions towards a technological breakthrough. Data-adapted and application-specific functions for efficient data evaluation and approximation (sparsity concepts, dictionary learning) were developed. In addition, localisation problems inherent in the underlying measurement methods (peak shifts) were modelled and analysed mathematically. The resulting procedures exploit multi-scale structures in order to process data sets with a trillion or more values efficiently.
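To make the sparsity idea concrete, the sketch below learns a dictionary on toy spectra and represents each spectrum as a sparse combination of learned atoms. It uses scikit-learn as an assumed, off-the-shelf illustration and is not the HYPERMATH implementation; all names and parameters are chosen for the example only.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

rng = np.random.default_rng(0)
spectra = rng.random((2_000, 256)).astype(np.float32)  # toy data: 2,000 spectra, 256 channels

dico = MiniBatchDictionaryLearning(
    n_components=32,             # number of dictionary atoms to learn
    transform_algorithm="omp",   # orthogonal matching pursuit for the sparse codes
    transform_n_nonzero_coefs=5, # each spectrum uses at most 5 atoms
    random_state=0,
)
codes = dico.fit(spectra).transform(spectra)  # sparse coefficients, shape (2000, 32)
reconstruction = codes @ dico.components_     # sparse approximation of the input spectra
```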