Imaging Informatics, Machine Learning, Integrated Diagnostics
In the current data-rich healthcare environment, our capacity to collect vast amounts of longitudinal observational data needs to be matched with a comparable ability to continuously learn from those data and enable individually tailored medicine. The Hsu Lab, led by William Hsu, PhD, focuses on the systematic integration of information across different data sources to improve the performance and robustness of clinical prediction models. Dr. Hsu directs the Integrated Diagnostics Shared Resource, an interdepartmental resource that prospectively collects clinical, imaging, and molecular data to improve the detection and characterization of early-stage cancer. He also leads a team of postdoctoral fellows and graduate students who are developing computational tools that harness clinical, imaging, and molecular data to help physicians formulate timely, accurate, and personalized management strategies for individual patients. The lab adapts and validates novel artificial intelligence and machine learning algorithms, translating them into applications that enable precision medicine. His team works on problems related to data wrangling, knowledge representation, machine learning, and interpretation, drawing on a wide spectrum of methods, from statistical modeling to machine and reinforcement learning, depending on the problem at hand. Dr. Hsu also oversees a team of software developers and analysts who harden and translate research products into real-world applications that improve the practice of radiology.
- Develop machine learning approaches for discovering optimal care pathways for individuals
- Build software tools and algorithms to enable integrated diagnostics research
- Use machine learning techniques to integrate clinical, imaging, and molecular data for integrated diagnostics
- Improve methods for evaluating and adopting machine learning models in clinical practice
Using machine and reinforcement learning to optimize medical decision making. Physicians routinely face the challenge of reasoning over complex, multimodal data under uncertainty. Algorithms are needed to discover what information is relevant to a decision and which test or treatment optimizes the outcome for a given patient. I am a co-principal investigator of a National Science Foundation grant under the Smart & Connected Health program. In this project, I investigate machine and reinforcement learning-based approaches to determine the timing and sequence of diagnostic tests, with the goal of minimizing overdiagnosis and overtreatment. Along with my collaborators, I have explored the application of inverse reinforcement learning as a data-driven method to learn rewards for instantiating a partially observable Markov decision process (POMDP) model that determines the optimal timing and action (follow-up imaging versus biopsy) for individuals undergoing cancer screening. Our initial work has focused on developing and validating a POMDP using data from the National Lung Screening Trial. Our model achieved better specificity while also detecting some cancers earlier (citation). The goal is to provide these sequential models with additional information about how suspicious findings change over time, with the hope of further improving their specificity and their ability to identify more clinically significant cancers earlier.
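To make the POMDP framing concrete, the sketch below shows the two core steps of such a model for a screening decision: a Bayesian belief update over a hidden disease state given an imaging observation, followed by a policy that chooses between follow-up imaging and biopsy. This is a minimal illustration, not the lab's fitted model; the state space, transition and observation probabilities, and the simple threshold policy are all hypothetical placeholders (the actual work learns rewards via inverse reinforcement learning and solves for an optimal policy).

```python
import numpy as np

# Hypothetical two-state screening POMDP: states = (benign, malignant).
# All probabilities are illustrative, not values fitted to trial data.
P_TRANS = np.array([[0.98, 0.02],   # benign -> benign / malignant per interval
                    [0.00, 1.00]])  # malignant remains malignant
# P(observation = suspicious | state) for a follow-up imaging exam
P_OBS = np.array([0.10, 0.85])      # false-positive rate, sensitivity

def belief_update(belief, suspicious):
    """Bayes filter: propagate through disease dynamics, then condition
    on whether the follow-up imaging exam looked suspicious."""
    predicted = belief @ P_TRANS
    likelihood = P_OBS if suspicious else 1.0 - P_OBS
    posterior = predicted * likelihood
    return posterior / posterior.sum()

def choose_action(belief, biopsy_threshold=0.3):
    """Placeholder threshold policy on P(malignant); a full POMDP solution
    would derive this policy from learned rewards instead."""
    return "biopsy" if belief[1] >= biopsy_threshold else "follow-up imaging"

belief = np.array([0.95, 0.05])               # prior after an initial screen
belief = belief_update(belief, suspicious=True)
action = choose_action(belief)
```

In this toy run, a single suspicious follow-up exam raises the belief in malignancy enough to cross the biopsy threshold, which is exactly the kind of timing-and-action decision the sequential model is meant to optimize.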
Modeling and interpreting multimodal data for integrated diagnostics. Biomedical imaging provides a multi-scale, in vivo characterization of disease progression and can be collected in a non-invasive and longitudinal manner. I am developing modeling approaches to uncover relationships between imaging and genomic features (i.e., radiogenomic associations). Our recent Bioinformatics paper (citation) presented a neural network-based approach to discover the nonlinear mapping between high-dimensional gene expression data and imaging traits. Integrating data from The Cancer Genome Atlas and The Cancer Imaging Archive on patients with glioblastoma, we trained a deep neural network to predict the appearance of various imaging traits (e.g., extent of edema) given the expression levels of 12,042 genes. We adapted post-hoc model interpretability methods such as class saliency and input masking to discover combinations of genes that are most influential in determining tumor appearance. I am currently extending this work to prostate cancer. In this collaborative effort with Holden Wu, Steven Raman, and investigators from Pathology and Human Genetics, we combine clinical, multiparametric imaging, and genomic information to improve the identification of men with aggressive prostate cancer. We assess different multimodal data fusion methods, including our neural network-based approach, to predict early biochemical recurrence.
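The class-saliency idea referenced above can be sketched with a toy stand-in: a small network maps a gene-expression vector to a predicted imaging trait, and the gradient of the prediction with respect to each input gene scores that gene's influence. Everything here is a hypothetical miniature, assuming 20 genes and random weights rather than the trained 12,042-gene model; it only illustrates the saliency computation itself.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy radiogenomic model: gene-expression vector -> probability of a binary
# imaging trait (e.g., presence of edema). Dimensions/weights are illustrative.
n_genes, n_hidden = 20, 8
W1 = rng.normal(scale=0.3, size=(n_genes, n_hidden))
w2 = rng.normal(scale=0.3, size=n_hidden)

def forward(x):
    """One-hidden-layer network with tanh hidden units and sigmoid output."""
    h = np.tanh(x @ W1)
    p = 1.0 / (1.0 + np.exp(-(h @ w2)))
    return p, h

def saliency(x):
    """Gradient of the predicted trait probability w.r.t. each gene's
    expression; a large |gradient| marks an influential gene."""
    p, h = forward(x)
    dp_dz = p * (1.0 - p)          # sigmoid derivative at the output
    dh_da = 1.0 - h ** 2           # tanh derivative at the hidden layer
    return W1 @ (dh_da * w2) * dp_dz

x = rng.normal(size=n_genes)                 # one patient's expression profile
grad = saliency(x)
top_genes = np.argsort(-np.abs(grad))[:3]    # most influential genes here
```

In the full-scale setting, ranking genes by saliency magnitude across patients is what surfaces candidate gene combinations associated with an imaging trait; input masking plays a similar role by zeroing inputs and measuring the change in prediction.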
William Hsu, PhD
Associate Professor of Radiological Sciences, Director of the Integrated Diagnostics Shared Resource, and member of the Medical & Imaging Informatics group. Dr. Hsu is a biomedical informatician whose research focuses on the systematic integration of information across different data sources to improve the performance and robustness of clinical prediction models.