Making Machine Learning Datasets and Models FAIR for HPC: A Methodology and Case Study

Published in the Fourth International Conference on Transdisciplinary AI (IEEE TransAI), 2022

Recommended citation: P. -H. Lin, C. Liao, W. Chen, T. Vanderbruggen, M. Emani and H. Xu, "Making Machine Learning Datasets and Models FAIR for HPC: A Methodology and Case Study," 2022 Fourth International Conference on Transdisciplinary AI (TransAI), Laguna Hills, CA, USA, 2022, pp. 128-134, doi: 10.1109/TransAI54797.2022.00029. https://ieeexplore.ieee.org/document/9951530

Abstract:

The FAIR Guiding Principles aim to improve the findability, accessibility, interoperability, and reusability of digital content by making it both human- and machine-actionable. However, these principles have not yet been broadly adopted in the domain of machine learning-based program analyses and optimizations for High-Performance Computing (HPC). In this paper, we design a methodology to make HPC datasets and machine learning models FAIR after investigating existing FAIRness assessment and improvement techniques. Our methodology includes a comprehensive, quantitative assessment of selected data, followed by concrete, actionable suggestions to improve FAIRness with respect to common issues related to persistent identifiers, rich metadata descriptions, license information, and provenance information. Moreover, we select a representative training dataset to evaluate our methodology. The experiment shows that the methodology effectively improves the dataset and model's FAIRness from an initial score of 19.1% to a final score of 83.0%.
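
To give a rough sense of the kind of quantitative FAIRness assessment the abstract describes, the sketch below scores a dataset's metadata record against a small checklist of indicators (persistent identifier, rich metadata, license, provenance). The indicator names, equal weights, and the `fairness_score` helper are hypothetical illustrations, not the assessment instrument used in the paper.

```python
# Toy FAIRness checklist scorer (illustrative only; indicators and weights
# are assumptions, not the paper's actual metric).

FAIR_INDICATORS = {
    "has_persistent_identifier": 1.0,          # F: e.g., a DOI resolving to the dataset
    "has_rich_metadata": 1.0,                  # F: title, creators, keywords, description
    "metadata_machine_readable": 1.0,          # I: e.g., a structured, standard-format record
    "retrievable_via_standard_protocol": 1.0,  # A: HTTP(S) landing page or API
    "has_license": 1.0,                        # R: explicit usage license
    "has_provenance": 1.0,                     # R: origin, versioning, processing steps
}

def fairness_score(record: dict) -> float:
    """Return the fraction of weighted indicators satisfied by a metadata record."""
    total = sum(FAIR_INDICATORS.values())
    earned = sum(w for name, w in FAIR_INDICATORS.items() if record.get(name))
    return earned / total

if __name__ == "__main__":
    before = {"has_rich_metadata": True}               # sparse initial record
    after = {name: True for name in FAIR_INDICATORS}   # after applying the suggested fixes
    print(f"before: {fairness_score(before):.1%}, after: {fairness_score(after):.1%}")
```

In this toy setup, filling in the missing identifier, license, and provenance fields raises the score from a low baseline toward 100%, mirroring the before/after improvement reported in the abstract.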