MATTES MOLLENHAUER

[google scholar] [github] [x]

mattes.mollenhauer (at) merantix-momentum.com

About

I am a Senior Machine Learning Researcher at Merantix Momentum. I am currently interested in information theory with applications to the efficiency, robustness, and compression of learning systems, as well as inference and hypothesis testing in high dimensions.

I obtained my PhD in Mathematics in 2022 under the supervision of Christof Schütte in the Biocomputing group at Freie Universität Berlin. In my dissertation, I investigated nonparametric models for Markov processes. During my PhD program, I spent some time doing R&D in the aircraft engine development division of Rolls-Royce Deutschland, where I worked on safety-critical learning systems for system identification and control.

During my time as a postdoctoral researcher in the group of Claudia Schillings at Freie Universität Berlin, I contributed to the mathematical theory of operator learning and conditional mean embeddings, providing guarantees for a variety of approaches in causal machine learning, reinforcement learning, and uncertainty quantification.

Publications and preprints


PUBLISHED

Martin Genzel, Patrick Putzky, Pengfei Zhao, Sebastian Schulze, Mattes Mollenhauer, Robert Seidel, Stefan Dietzel and Thomas Wollmann.
"Choose Your Model Size: Any Compression of Large Language Models Without Re-Computation".
Transactions on Machine Learning Research, 2025. link

Mattes Mollenhauer, Nicole Mücke, Dimitri Meunier and Arthur Gretton.
"Regularized least squares learning with heavy-tailed noise is minimax optimal".
Advances in Neural Information Processing Systems, Vol. 39, 2025. link

Martin Genzel, Patrick Putzky, Pengfei Zhao, Sebastian Schulze, Mattes Mollenhauer, Robert Seidel, Stefan Dietzel and Thomas Wollmann.
"Compressing Large Language Models to Any Size Without Re-Computation".
ICML Workshop on Efficient Systems for Foundation Models, 2025. link

Dimitri Meunier, Zikai Shen, Mattes Mollenhauer, Arthur Gretton and Zhu Li.
"Optimal Rates for Vector-Valued Spectral Regularization Learning Algorithms".
Advances in Neural Information Processing Systems, Vol. 38, 2024. link

Zhu Li, Dimitri Meunier, Mattes Mollenhauer and Arthur Gretton.
"Towards Optimal Sobolev Norm Rates for the Vector-Valued Regularized Least-Squares Algorithm".
Journal of Machine Learning Research, 25(181):1–51, 2024. link

Andreas Bittracher, Mattes Mollenhauer, Péter Koltai and Christof Schütte.
"Optimal Reaction Coordinates: Variational Characterization and Sparse Computation".
SIAM Multiscale Modeling & Simulation, 21:449–488, 2023. link

Zhu Li, Dimitri Meunier, Mattes Mollenhauer and Arthur Gretton.
"Optimal Rates for Regularized Conditional Mean Embedding Learning".
Advances in Neural Information Processing Systems, Vol. 36, 2022. link

Mattes Mollenhauer, Stefan Klus, Christof Schütte and Péter Koltai.
"Kernel autocovariance operators of stationary processes: Estimation and convergence".
Journal of Machine Learning Research, 23(327):1–34, 2022. link

Mattes Mollenhauer, Ingmar Schuster, Stefan Klus and Christof Schütte.
"Singular Value Decomposition of Operators on Reproducing Kernel Hilbert Spaces".
Advances in Dynamics, Optimization and Computation, 109–130, Springer, 2020. link

Ingmar Schuster, Mattes Mollenhauer, Stefan Klus and Krikamol Muandet.
"Kernel Conditional Density Operators".
23rd International Conference on Artificial Intelligence and Statistics, 2020. link

Stefan Klus, Brooke E. Husic, Mattes Mollenhauer and Frank Noé.
"Kernel methods for detecting coherent structures in dynamical data".
Chaos: An Interdisciplinary Journal of Nonlinear Science, 29(12), 2019. link


NOTES AND PREPRINTS

Patrick Putzky, Martin Genzel, Mattes Mollenhauer, Sebastian Schulze, Thomas Wollmann and Stefan Dietzel.
"Float8@2bits: Entropy Coding Enables Data-Free Model Compression".
Preprint, 2026. link

Mattes Mollenhauer and Christian Fiedler.
"Fuk–Nagaev inequality in smooth Banach spaces: Optimum bounds for distributions of heavy-tailed martingales".
Preprint, 2025. link

Mattes Mollenhauer and Claudia Schillings.
"On the concentration of subgaussian vectors and positive quadratic forms in Hilbert spaces".
Preprint, 2023. link

Mattes Mollenhauer, Nicole Mücke and Tim Sullivan.
"Learning linear operators: Infinite-dimensional regression as a well-behaved non-compact inverse problem".
Preprint, 2022. link

Mattes Mollenhauer and Péter Koltai.
"Nonparametric approximation of conditional expectation operators".
Preprint, 2020. link


MISCELLANEOUS

Mattes Mollenhauer.
"On the Statistical Approximation of Conditional Expectation Operators".
PhD thesis, Freie Universität Berlin, 2022. link