Mission Statement

"AI for Science" is a program that Prof. Judy Fox co-hosted in the summer of 2023, funded by NSF CyberTraining, and continued in fall 2023, when eleven undergraduate students participated in "AI for Science" research. The overarching goal is to advance "AI for Science," with highlights on public health and responsible AI infrastructure. The undergraduate students engaged in real-time machine learning efforts for interdisciplinary applications, including NSF-funded work in the Global Pervasive Computational Epidemiology Expeditions in Computing project. Another team-science area applies transformational AI advances to earthquake prediction and finance. We are building a community at the University of Virginia that engages and inspires students to become leaders and pioneers who understand the foundations and practical applications of data science across various domains.

Interpretable AI for Time-Series Models

Interpreting Spatio-Temporal patterns with Self-Attention

This study uses the Temporal Fusion Transformer (TFT) to forecast COVID-19 infections at the US county level, achieving superior prediction performance while analyzing the detailed temporal and spatial patterns captured by its self-attention weights. By interpreting the model's learned patterns across 2.5 years of socioeconomic and health data from 3,142 counties, this research provides valuable insights to aid effective public health decision-making. Read more
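The attention-based interpretation idea can be illustrated with a minimal sketch (not the paper's actual pipeline): given a trained model's self-attention weights over an input window, averaging across heads and query positions yields a per-timestep importance profile. The random scores below are a hypothetical stand-in for weights extracted from a trained forecaster.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# Toy stand-in for a trained forecaster's self-attention scores:
# (heads, query timesteps, key timesteps) over a 14-day input window.
n_heads, n_steps = 4, 14
scores = rng.normal(size=(n_heads, n_steps, n_steps))
attn = softmax(scores, axis=-1)      # each attention row sums to 1

# Average over heads and query positions to get one importance weight
# per past timestep (how much the model attends to that day overall).
importance = attn.mean(axis=(0, 1))  # shape: (n_steps,)
top_day = int(importance.argmax())   # most-attended past day
print(importance.round(3), top_day)
```

In the actual TFT, the analogous weights come from its interpretable multi-head attention layer; the averaging step is the same idea.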

Interpreting Feature Interactions using Sensitivity Analysis

This study applies eight recent local interpretation methods to six Transformer-based time series models, comparing them to find the best predictor of COVID-19 cases using three years of data from 3,142 US counties. It also analyzes the epidemic's sensitivity to different population age groups and compares the interpreted sensitivity with ground truth. The framework is further tested on other datasets to demonstrate broader applicability. Read more

Windowed Temporal Saliency Analysis for Multi-Horizon Time Series Forecasting

Interpreting time series models is challenging due to the temporal dependencies between time steps and the changing significance of input features over time. This work addresses these challenges by providing clearer explanations of feature interactions and by showcasing the practical application of these interpretability techniques on real-world datasets with cutting-edge deep learning models. Read more
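A minimal sketch of the windowed-saliency idea, under assumptions: slide a window across the input timesteps, occlude each window with a baseline (here, the per-feature mean), and score the window by how much the model's output changes. The `model` function is a hypothetical stand-in for a trained forecaster.

```python
import numpy as np

def model(x):
    # Hypothetical stand-in for a trained forecaster: a weighted sum
    # that emphasizes recent days. x has shape (timesteps, features).
    w = np.linspace(0.1, 1.0, x.shape[0])[:, None]
    return float((w * x).sum())

def windowed_saliency(x, window=3, baseline=None):
    """Occlude each time window with a baseline and measure output change."""
    baseline = x.mean(axis=0) if baseline is None else baseline
    base_pred = model(x)
    scores = []
    for start in range(x.shape[0] - window + 1):
        x_masked = x.copy()
        x_masked[start:start + window] = baseline  # occlude the window
        scores.append(abs(model(x_masked) - base_pred))
    return np.array(scores)  # one saliency score per window start

x = np.random.default_rng(1).normal(size=(14, 5))  # 14 days, 5 features
sal = windowed_saliency(x, window=3)
print(sal.argmax())  # window whose occlusion changes the forecast most
```

The choice of baseline (mean, zeros, or historical values) materially affects the saliency map, which is one reason validation against ground truth matters.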

Awards

Accepted to the IEEE ICDH Conference; Awarded 3rd place. Recipient: Md Khairul Islam

Accepted through the AAAI Doctoral Consortium Track. Recipient: Md Khairul Islam

Accepted to the 2023 IEEE ICDH Conference and archived in IEEE Xplore. Recipients: Md Khairul Islam, Yingzheng Liu, Andrej Erkelens, Nick Daniello, and Judy Fox

Publications

Refereed Journal Article Citations/References

Interpreting County-Level COVID-19 Infections using Transformer and Deep Learning Time Series Models

Proceedings of the IEEE International Conference on Digital Health (ICDH). 2023 July 02; Volume 1 (Issue 1). This work won the NSF Student Research Competition Award (3rd Place Prize), July 2023.

Deep learning for time series plays a key role in AI for healthcare. To predict the progress of infectious disease outbreaks and demonstrate clear population-level impact, more granular analyses are urgently needed that control for important and potentially confounding county-level socioeconomic and health factors. We forecast US county-level COVID-19 infections using the Temporal Fusion Transformer (TFT). We focus on heterogeneous time-series deep learning model prediction while interpreting the complex spatiotemporal features learned from the data. The significance of the work is grounded in real-world COVID-19 infection prediction with highly non-stationary, finely granular, and heterogeneous data.

Opportunities for enhancing MLCommons efforts while leveraging insights from educational MLCommons earthquake benchmarks efforts

Journal of Frontiers in High-Performance Computing. Available in the NSF Public Access Repository. 2023 October 23.

MLCommons is an effort to develop and improve the artificial intelligence (AI) ecosystem through benchmarks, public data sets, and research. It consists of members from start-ups, leading companies, academics, and non-profits from around the world. The goal is to make machine learning better for everyone. In order to increase participation by others, educational institutions provide valuable opportunities for engagement. In this article, we identify numerous insights obtained from different viewpoints as part of efforts to utilize high-performance computing (HPC) big data systems in existing education while developing and conducting science benchmarks for earthquake prediction.

Temporal Dependencies and Spatio-Temporal Patterns of Time Series Models

Accepted to the AAAI-24 Doctoral Consortium, The 38th Annual AAAI Conference on Artificial Intelligence (AAAI-24), Vancouver, Canada. 2024 February 27.

The widespread use of Artificial Intelligence (AI) has highlighted the importance of understanding AI model behavior. This understanding is crucial for practical decision-making, assessing model reliability, and ensuring trustworthiness. Interpreting time series forecasting models faces unique challenges compared to image and text data. These challenges arise from the temporal dependencies between time steps and the evolving importance of input features over time. My thesis focuses on addressing these challenges by aiming for more precise explanations of feature interactions, uncovering spatiotemporal patterns, and demonstrating the practical applicability of these interpretability techniques using real-world datasets and state-of-the-art deep learning models.

Validation, Robustness, and Accuracy of Perturbation-based Sensitivity Analysis Methods for Time-Series Deep Learning Models

Accepted to the AAAI-24 Undergraduate Consortium. The 38th Annual AAAI Conference on Artificial Intelligence (AAAI-24), Vancouver, Canada. 2024 February 27.

This work undertakes studies to evaluate interpretability methods for time-series deep learning. Sensitivity analysis assesses how input changes affect the output, constituting a key component of interpretation. Among post-hoc interpretation methods such as back-propagation, perturbation, and approximation, my work will investigate perturbation-based sensitivity analysis methods on modern Transformer models to benchmark their performance. Specifically, my work answers three research questions: 1) Do different sensitivity analysis (SA) methods yield comparable outputs and attribute importance rankings? 2) Using the same sensitivity analysis method, do different Deep Learning (DL) models impact the output of the sensitivity analysis? 3) How well do the results from sensitivity analysis methods align with the ground truth?
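The perturbation idea behind these questions can be sketched minimally, under assumptions: zero out one input feature at a time and rank features by how much the model's output changes. The linear `model` and its weights below are hypothetical stand-ins for a trained Transformer forecaster, chosen so the expected ranking is known.

```python
import numpy as np

rng = np.random.default_rng(2)
true_w = np.array([0.1, 2.0, 0.5, 0.0])  # feature 1 matters most, 3 not at all

def model(x):
    # Hypothetical stand-in for a trained forecaster. x: (samples, features).
    return x @ true_w

X = rng.normal(size=(100, 4))
base = model(X)

# Occlusion-style perturbation: zero one feature, measure mean output change.
sensitivity = []
for j in range(X.shape[1]):
    Xp = X.copy()
    Xp[:, j] = 0.0
    sensitivity.append(np.mean(np.abs(model(Xp) - base)))

ranking = np.argsort(sensitivity)[::-1]  # most to least sensitive feature
print(ranking)
```

Because the stand-in model is linear, the recovered ranking can be checked against the known weights, which mirrors the thesis's third question of validating sensitivity results against ground truth.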

Interpreting Time Series Transformer Models and Sensitivity Analysis of Population Age Groups to COVID-19 Infections

Accepted to the AI for Time-Series workshop. The 38th Annual AAAI Conference on Artificial Intelligence (AAAI-24), Vancouver, Canada. 2024 February 27.

Interpreting deep learning time series models is crucial in understanding the model's behavior and learning patterns from raw data for real-time decision-making. However, the complexity inherent in transformer-based time series models poses challenges in explaining the impact of individual features on predictions. In this study, we leverage recent local interpretation methods to interpret state-of-the-art time series models. To use real-world datasets, we collected three years of daily case data for 3,142 US counties. Firstly, we compare six transformer-based models and choose the best prediction model for COVID-19 infection. Using 13 input features from the last two weeks, we can predict the cases for the next two weeks. Secondly, we present an innovative way to evaluate the prediction sensitivity to 8 population age groups over highly dynamic multivariate infection data. Thirdly, we compare our proposed perturbation-based interpretation method with related work, including a total of eight local interpretation methods. Finally, we apply our framework to traffic and electricity datasets, demonstrating that our approach is generic and can be applied to other time-series domains.

Software Packages developed or updated

Interpreting County-Level COVID-19 Infections using Transformer and Deep Learning Time Series Models

GitHub: Link

Interpreting Time Series Transformer Models and Sensitivity Analysis of Population Age Groups to COVID-19 Infections

GitHub: Link

HySec-Flow: A Scalable and Secure Framework for Data Intensive Heterogeneous Computing

GitHub: Link

Surrogate Simulation for Earthquake Prediction

GitHub: Link

Additional Research Media Published

Talk at the Virginia booth, "Interpretable AI using Transformer and Deep Learning Time-Series Models Sensitivity Analysis for County Level COVID-19 Infections"

The International Conference for High Performance Computing, Networking, Storage, and Analysis (SC'23), Denver, Colorado, November 12–17, 2023.

Contributors

Professors

Dr. Judy Fox
Data Science, Computer Science

PhD Students

Yingzheng Liu
Computer Science
Mia Yuan
Data Science
Md Khairul Islam
Computer Science

Masters Students

Andrej Erkelens
Data Science
Luke Benham
Computer Science
Nicholas Daniello
Data Science
Aparna Marathe
Data Science

Undergraduate Students

Nicholas Kellogg
Data Science, Economics
Maxim Titov
Computer Engineering, Circuits
George Boulos
Systems Engineering
Shamsul Haque
Computer Science
Zhengguang Wang
Computer Science, Statistics
Timothy Sue
Computer Science, Applied Mathematics
Kingsley Kim
Computer Science, Physics, Astronomy
Ayush Karmacharya
Computer Science, Economics
Tracy Hua
Systems Engineering, Data Science