Trustworthy Machine Learning in Biomedical Research
As machine learning becomes increasingly central to biomedical discovery and clinical decision-making, ensuring that these models are reliable, fair, and interpretable is critical. Our lab develops and applies machine learning methods that are not only accurate but also trustworthy: robust to noise, generalizable across datasets, transparent in their decision-making, and aligned with ethical and clinical standards.
Our work spans multiple aspects of trustworthy ML, including uncertainty quantification, model calibration, interpretability, fairness in predictive models, and robustness to distribution shift. These properties are especially important in healthcare, where model-informed decisions can have direct consequences for patients.
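To make one of these notions concrete, the sketch below computes the expected calibration error (ECE), a standard diagnostic for model calibration: it measures how far a model's stated confidence is from its actual accuracy. This is a minimal, illustrative example in plain Python, not a description of any specific method or tool from the lab; the function name and bin count are assumptions for the sketch.

```python
# Minimal sketch of expected calibration error (ECE): bin predictions by
# confidence, then average |accuracy - mean confidence| per bin, weighted
# by the fraction of predictions falling in that bin.

def expected_calibration_error(confidences, correct, n_bins=10):
    """confidences: predicted probabilities in [0, 1];
    correct: 1 if the prediction was right, else 0."""
    total = len(confidences)
    ece = 0.0
    for b in range(n_bins):
        lo, hi = b / n_bins, (b + 1) / n_bins
        in_bin = [i for i, c in enumerate(confidences)
                  if (lo < c <= hi) or (b == 0 and c == lo)]
        if not in_bin:
            continue
        avg_conf = sum(confidences[i] for i in in_bin) / len(in_bin)
        acc = sum(correct[i] for i in in_bin) / len(in_bin)
        ece += (len(in_bin) / total) * abs(acc - avg_conf)
    return ece

# Toy example: predictions made with 0.8 confidence that are
# correct 80% of the time are perfectly calibrated (ECE near 0).
conf = [0.8] * 10
corr = [1] * 8 + [0] * 2
print(round(expected_calibration_error(conf, corr), 4))  # prints 0.0
```

A low ECE means the model's probabilities can be read at face value, which matters in clinical settings where a reported 80% confidence should correspond to roughly 80% accuracy.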
In the context of multi-omics data, single-cell analysis, and quantitative imaging, we embed trustworthiness principles throughout the model development pipeline, from data preprocessing and integration to prediction and interpretation, so that our computational outputs can be used with confidence to guide biological insight and translational applications.
Ongoing Projects
To be completed: This section will describe specific methods, tools, or case studies currently under development in the lab that focus on trustworthy machine learning.