Research
Efficient Machine Learning
As machine learning models—especially large language models—grow in size and complexity, so do their computational and environmental costs. My research in efficient machine learning is motivated by the need to make these powerful models more practical, sustainable, and accessible. I explore strategies such as model compression, pruning, and low-rank approximations to reduce resource requirements while preserving performance. The overarching goal is to design scalable learning systems that retain their capabilities across a wide range of devices and deployment settings.
Selected Publications:
- Ke Bian, Lu Sun and Dengji Zhao, “Learning Compact Neural Networks via Generalized Structured Sparsity”, in Proceedings of the 27th European Conference on Artificial Intelligence (ECAI 2024), accepted, 2024, Santiago de Compostela, Spain.
- Tianxiao Cao, Lu Sun, Canh Hao Nguyen and Hiroshi Mamitsuka, “Learning Low-Rank Tensor Cores with Probabilistic ℓ0-Regularized Rank Selection for Model Compression”, in Proceedings of the 33rd International Joint Conference on Artificial Intelligence (IJCAI 2024), accepted, 2024, Jeju, Korea.
- Jiahui Xu, Lu Sun and Dengji Zhao, “MoME: Mixture-of-Masked-Experts for Efficient Multi-Task Recommendation”, in Proceedings of the 47th International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR 2024), 2527-2531, 2024, Washington D.C., USA.
- Luhuan Fei, Lu Sun, Mineichi Kudo and Keigo Kimura, “Structured Sparse Multi-Task Learning with Generalized Group Lasso”, in Proceedings of the 26th European Conference on Artificial Intelligence (ECAI 2023), 2023, Krakow, Poland.
- Xinyi Wang, Lu Sun, Canh Hao Nguyen and Hiroshi Mamitsuka, “Multiplicative Sparse Tensor Factorization for Multi-View Multi-Task Learning”, in Proceedings of the 26th European Conference on Artificial Intelligence (ECAI 2023), 2023, Krakow, Poland.
Interpretable Machine Learning
In high-stakes domains like healthcare, law, and finance, understanding why a machine learning model makes a certain prediction is just as important as the prediction itself. My research in interpretable machine learning is driven by the challenge of making complex models more transparent and trustworthy. I am particularly interested in methods that can identify and explain which features are most relevant for individual predictions, enabling personalized and context-aware insights. This line of work aims to bridge the gap between model performance and human understanding, empowering users to make informed decisions based on model outputs.
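As a toy illustration of per-prediction feature relevance, the sketch below fits an ordinary least-squares linear model and decomposes one individual prediction into per-feature contributions. This is a generic local-attribution example under assumed synthetic data, not the method of any paper below:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data in which only features 0 and 3 actually drive the target.
X = rng.standard_normal((200, 6))
true_w = np.array([2.0, 0.0, 0.0, -1.5, 0.0, 0.0])
y = X @ true_w + 0.1 * rng.standard_normal(200)

# Fit an ordinary least-squares linear model.
w, *_ = np.linalg.lstsq(X, y, rcond=None)

# Local explanation for one (hypothetical) individual: each feature's
# contribution w_j * x_j to that sample's prediction.
x = np.array([1.0, 0.5, -0.3, 1.0, 0.2, -0.4])
contrib = w * x
ranked = np.argsort(-np.abs(contrib))
print("most relevant features for this sample:", ranked[:2])
```

For a linear model the prediction is exactly the sum of these contributions, so the ranking is faithful by construction; the research challenge addressed in the publications below is obtaining comparably faithful, sparse, per-sample explanations when the model is shared across many tasks or views.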
Selected Publications:
- Luhuan Fei, Weijia Lin, Jiankun Wang, Lu Sun, Mineichi Kudo and Keigo Kimura, “Multi-View Multi-Label Personalized Classification via Generalized Exclusive Sparse Tensor Factorization”, Knowledge and Information Systems, accepted, 2025.
- Luhuan Fei, Xinyi Wang, Jiankun Wang, Lu Sun and Yuyao Zhang, “Multi-Level Sparse Network Lasso: Locally Sparse Learning with Flexible Sample Clusters”, Neurocomputing, 2025.
- Jiankun Wang, Luhuan Fei and Lu Sun, “Multi-Level Network Lasso for Multi-Task Personalized Learning”, Pattern Recognition, 2024.
- Weijia Lin, Jiankun Wang, Lu Sun, Mineichi Kudo and Keigo Kimura, “Multi-Label Personalized Classification via Exclusive Sparse Tensor Factorization”, in Proceedings of the 23rd IEEE International Conference on Data Mining (ICDM 2023), 2023, Shanghai, China.
- Jiankun Wang and Lu Sun, “Multi-Task Personalized Learning with Sparse Network Lasso”, in Proceedings of the 31st International Joint Conference on Artificial Intelligence (IJCAI 2022), 3516-3522, 2022, Vienna, Austria.
