Staff:

Dr. Matthias Mnich, TU Hamburg-Harburg

Description:

The main research goal of this project is to develop a rigorous mathematical theory of input-output-efficient preprocessing. This new theory will provide the computational tools to design powerful algorithms for preprocessing very large instances of hard problems, algorithms that efficiently compress such instances to smaller ones of guaranteed size. Our motivation is the inability of current preprocessing routines with a compression guarantee (kernelizations) to handle very large instances that do not fit into main memory. The theory also seeks to rigorously explain the practical success of preprocessing very large instances with algorithms that offer no compression guarantee (heuristics), and will lead to a notion of computational intractability that explains the limitations of such heuristics.
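To make the notion of a kernelization with a compression guarantee concrete, here is a minimal sketch, purely illustrative and not an algorithm of this project, of the classical Buss kernelization for Vertex Cover in Python; the function name and interface are our own. It exhaustively applies the high-degree rule (a vertex with more than k neighbours must be in any cover of size at most k) and then bounds the size of the residual instance.

    from collections import defaultdict

    def buss_kernel(edges, k):
        """Buss's kernelization for Vertex Cover (illustrative sketch).

        edges: iterable of 2-tuples over a simple graph; k: budget.
        Returns (reduced_edges, reduced_k, forced_vertices), or None
        if no vertex cover of size <= k can exist. The reduced
        instance has at most k'^2 edges for the reduced budget k'.
        """
        edges = {frozenset(e) for e in edges}
        forced = set()  # vertices that must be in every small cover
        changed = True
        while changed and k >= 0:
            changed = False
            adj = defaultdict(set)
            for e in edges:
                u, v = tuple(e)
                adj[u].add(v)
                adj[v].add(u)
            for v, nbrs in adj.items():
                # High-degree rule: degree > k forces v into the cover,
                # since otherwise all of its > k neighbours would be.
                if len(nbrs) > k:
                    forced.add(v)
                    k -= 1
                    edges = {e for e in edges if v not in e}
                    changed = True
                    break
        # Size bound: k vertices of degree <= k cover <= k^2 edges,
        # so a larger residual instance is a provable no-instance.
        if k < 0 or len(edges) > k * k:
            return None
        return edges, k, forced

    # Example: a star with 5 leaves and k = 1; the centre is forced,
    # leaving an empty kernel: (set(), 0, {0}).
    print(buss_kernel([(0, i) for i in range(1, 6)], 1))

The compression guarantee is the essential point: no matter how large the input, the size of the reduced instance is bounded by a function of the parameter k alone, which is exactly the property that kernelizations provide and heuristics lack.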

The project aims to design preprocessing algorithms that harness the full capabilities of modern processor technology and the memory hierarchies of computing hardware used in science and industry, in order to compress big data sets efficiently. Through new multivariate computational models that exploit instance structure and hardware structure at the same time, we will deepen the understanding of the mathematical origins of compressibility and build more powerful algorithms for preprocessing massive data sets.
