Research

Efficient Machine Learning
As machine learning models—especially large language models—grow in size and complexity, so do their computational and environmental costs. My research in efficient machine learning is motivated by the need to make these powerful models more practical, sustainable, and accessible. I explore strategies such as model compression, pruning, and low-rank approximations to reduce resource requirements while preserving performance. The overarching goal is to design scalable learning systems that retain their capabilities across a wide range of devices and deployment settings.
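As one illustration, here is a minimal NumPy sketch of two of the compression strategies mentioned above: magnitude pruning (zeroing the smallest weights) and low-rank approximation via truncated SVD. The weight matrix, sparsity level, and rank below are purely illustrative, not taken from any specific model:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((64, 64))  # a hypothetical dense weight matrix

# Magnitude pruning: zero out the smallest-magnitude weights,
# keeping only the top 10% by absolute value.
sparsity = 0.9
threshold = np.quantile(np.abs(W), sparsity)
W_pruned = np.where(np.abs(W) >= threshold, W, 0.0)

# Low-rank approximation: keep only the top-k singular components,
# replacing one 64x64 matrix with two thin factors.
k = 8
U, S, Vt = np.linalg.svd(W, full_matrices=False)
W_lowrank = U[:, :k] @ np.diag(S[:k]) @ Vt[:k, :]

# Both trade a controlled approximation error for fewer
# stored parameters (and, with suitable kernels, less compute).
print("fraction pruned:", np.mean(W_pruned == 0))
print("low-rank parameter ratio:",
      (U[:, :k].size + k + Vt[:k].size) / W.size)
```

In practice these ideas are applied layer-by-layer to trained networks, often followed by fine-tuning to recover accuracy lost to the approximation.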

Selected Publications:

Interpretable Machine Learning
In high-stakes domains like healthcare, law, and finance, understanding why a machine learning model makes a certain prediction is just as important as the prediction itself. My research in interpretable machine learning is driven by the challenge of making complex models more transparent and trustworthy. I am particularly interested in methods that can identify and explain which features are most relevant for individual predictions, enabling personalized and context-aware insights. This line of work aims to bridge the gap between model performance and human understanding, empowering users to make informed decisions based on model outputs.
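The simplest setting where per-prediction feature relevance can be made exact is a linear model: each feature's contribution to one prediction decomposes additively as w_i * x_i, which is the intuition that attribution methods for complex models try to generalize. The weights, input, and feature names in this sketch are purely hypothetical:

```python
import numpy as np

# A hypothetical linear model: prediction = w . x + b.
w = np.array([1.5, -2.0, 0.0, 0.7])
b = 0.5
feature_names = ["age", "dose", "weight", "bp"]  # illustrative names


def predict(x):
    return w @ x + b


# For a linear model, the prediction decomposes exactly into
# per-feature contributions w_i * x_i plus the bias term.
x = np.array([2.0, 1.0, 3.0, 0.0])
contributions = w * x

# Rank features by how much they drive *this* prediction.
for name, c in sorted(zip(feature_names, contributions),
                      key=lambda t: -abs(t[1])):
    print(f"{name}: {c:+.2f}")
print("prediction:", predict(x))  # equals b + sum of contributions
```

For nonlinear models the decomposition is no longer exact, and methods such as gradient-based saliency or Shapley-value attributions approximate this same additive picture locally around each input.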

Selected Publications: