Posts
Bridging calibration and refinement — our latest work at AISTATS!
We present a general method for consistent and asymptotically unbiased estimation of proper calibration errors and refinement terms. Introducing the Kullback–Leibler calibration error, we reveal its connection to f-divergences and information monotonicity in neural networks.
Sebastian Gruber
Dec 14, 2025
1 min read
PDF
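For context on the quantity named in this post, one common way to define a proper calibration error is as the expected divergence between the true label distribution given the prediction and the prediction itself; instantiating the divergence with Kullback–Leibler gives a KL calibration error. The notation below is an assumed sketch and may differ from the paper's exact formulation.

```latex
% Sketch under assumed notation: f(X) is the predicted probability vector,
% Y the label, and D a divergence associated with a proper scoring rule.
\[
  \mathrm{CE}_{D}(f) = \mathbb{E}_{X}\left[ D\bigl( \mathbb{P}(Y \mid f(X)) \,\big\|\, f(X) \bigr) \right]
\]
% Choosing D = D_KL yields a Kullback--Leibler calibration error:
\[
  \mathrm{CE}_{\mathrm{KL}}(f) = \mathbb{E}_{X}\left[ D_{\mathrm{KL}}\bigl( \mathbb{P}(Y \mid f(X)) \,\big\|\, f(X) \bigr) \right]
\]
```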
Insights into FLAME: Visit by Dr. Marius Herr
We hosted Dr. Marius Herr from the University of Tübingen, who presented FLAME — Federated Learning and Analyses in Medicine, a platform developed within the privateAIM initiative. He outlined how patient data can be analyzed in a privacy-preserving manner and demonstrated how our methods could be applied to clinical data through FLAME.
Bernhard Hellmann
Dec 4, 2025
1 min read
Welcome to our newest team member, Azza
Azza joins our ML sub-team as a doctoral researcher working on uncertainty quantification and estimation for trustworthy AI. She brings experience in applied machine learning and language model engineering and will support our ongoing research on reliable ML methods in oncology.
Bernhard Hellmann
Nov 17, 2025
1 min read
Our paper “Fine-Grained Uncertainty Decomposition in Large Language Models: A Spectral Approach” is now available at AAAI
This work presents Spectral Uncertainty, a new way to decompose uncertainty in large language models. Using the von Neumann entropy, the method distinguishes aleatoric from epistemic uncertainty and incorporates detailed semantic structure in model outputs.
Nassim Walha
Nov 16, 2025
1 min read
PDF
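As background on the entropy mentioned in this post: the von Neumann entropy of a density matrix ρ is S(ρ) = −Tr(ρ log ρ). The sketch below shows one generic way to obtain such a matrix from the embeddings of several sampled model responses; it is an illustration only, not the decomposition proposed in the paper, and all names are placeholders.

```python
import numpy as np

def von_neumann_entropy(embeddings: np.ndarray) -> float:
    """Von Neumann entropy of a density matrix built from response embeddings.

    `embeddings` has shape (n_responses, dim); this construction is a
    generic illustration, not the method from the paper.
    """
    # L2-normalize each embedding so the Gram matrix has unit diagonal.
    X = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    gram = X @ X.T
    # Normalize the trace to 1 so the matrix behaves like a density matrix.
    rho = gram / np.trace(gram)
    # S(rho) = -Tr(rho log rho), computed via the eigenvalues of rho.
    eigvals = np.linalg.eigvalsh(rho)
    eigvals = eigvals[eigvals > 1e-12]  # drop numerical zeros
    return float(-np.sum(eigvals * np.log(eigvals)))

# Example: five sampled responses embedded in a 384-dimensional space.
rng = np.random.default_rng(0)
print(von_neumann_entropy(rng.normal(size=(5, 384))))
```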
Florian named among the world’s most cited researchers — huge congrats to the whole MLO Lab team!
A proud moment: Florian has been listed as a Highly Cited Researcher 2025, placing him among the top 1% of scientists worldwide. This recognition reflects the shared work, ideas, and energy that move our lab forward every day.
Bernhard Hellmann
Nov 14, 2025
1 min read
Welcome to our newest team member, Rashika
Rashika joins the MLO Lab as a doctoral researcher working on probabilistic machine learning for multi-omic data integration and foundation-model approaches for single-cell representation learning. She will contribute to our research on machine learning methods for cancer data analysis.
Bernhard Hellmann
Nov 10, 2025
1 min read
Welcome to the team, Hendrik
Hendrik joins our ML sub-team, working on machine learning methods that combine large-scale EHR data with other data types such as genomics. He focuses on causality and uncertainty estimation and will support our ongoing methodological work.
Bernhard Hellmann
Oct 18, 2025
1 min read
Welcome to the team, Yusuf
Yusuf joins our bioinformatics sub-team as a doctoral researcher working on interpretable probabilistic machine learning models for multimodal data integration. He applies these methods to projects on novel mRNA technologies for colorectal and pancreatic cancer and will support our ongoing research efforts.
Florian Buettner
Oct 1, 2025
1 min read
Our paper “Learning interpretable representations of single-cell multi-omics data with multi-output Gaussian processes” has been published in Nucleic Acids Research.
We present a unified framework that combines expressive neural embeddings with interpretable multi-output Gaussian processes for single-cell genomics. Joint representations of cells and genes reveal meaningful links between cell clusters and their marker genes via an interpretable gene-relevance map.
Zahra Moslehi
Aug 12, 2025
1 min read
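For readers unfamiliar with multi-output Gaussian processes, the sketch below shows one standard construction, the intrinsic coregionalization model, in which a shared input kernel k is combined with a learned output-covariance matrix B. This is a generic illustration under assumed toy data, not the model from the paper.

```python
import numpy as np

def rbf_kernel(X1, X2, lengthscale=1.0):
    """Squared-exponential kernel on the shared input (latent) space."""
    sq_dists = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * sq_dists / lengthscale**2)

def icm_covariance(X, B, lengthscale=1.0):
    """Intrinsic coregionalization model: Cov = B (outputs) Kronecker k(X, X) (inputs)."""
    K_input = rbf_kernel(X, X, lengthscale)
    return np.kron(B, K_input)

# Toy example: 10 cells embedded in 2-D, 3 correlated outputs (e.g. genes).
rng = np.random.default_rng(1)
X = rng.normal(size=(10, 2))
A = rng.normal(size=(3, 2))
B = A @ A.T + 0.1 * np.eye(3)    # low-rank + jitter gives a valid output covariance
K = icm_covariance(X, B)         # (3*10) x (3*10) joint covariance over outputs and cells
sample = rng.multivariate_normal(np.zeros(30), K + 1e-6 * np.eye(30))
```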
An autonomous agent for auditing and improving the reliability of clinical AI models — now published.
We introduce ModelAuditor, a self-reflective agent that simulates clinically relevant distribution shifts and produces interpretable reports on likely failure modes. Across multiple medical imaging domains, it recovers up to 25% of performance lost under shift while providing actionable deployment insights.
Lukas Kuhn
Jul 8, 2025
1 min read
PDF
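The core idea of auditing a model under distribution shift can be illustrated with a minimal sketch: apply a synthetic corruption to the evaluation data and compare performance before and after. The actual ModelAuditor agent is more elaborate and self-reflective; the function names and the additive-noise corruption below are illustrative assumptions.

```python
import numpy as np

def accuracy(model, X, y):
    """Fraction of correct predictions; `model` is assumed to expose predict(X)."""
    return float((model.predict(X) == y).mean())

def audit_under_shift(model, X, y, noise_levels=(0.0, 0.1, 0.3, 0.5), seed=0):
    """Report accuracy under increasingly strong additive-noise shift.

    A minimal stand-in for one simulated distribution shift: Gaussian noise
    added to the inputs. Real audits would cover many clinically relevant shifts.
    """
    rng = np.random.default_rng(seed)
    report = {}
    for sigma in noise_levels:
        X_shifted = X + rng.normal(scale=sigma, size=X.shape)
        report[sigma] = accuracy(model, X_shifted, y)
    return report

# Usage (illustrative): report = audit_under_shift(trained_model, X_val, y_val)
```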