ANALYSIS OF EXPLAINABLE STUDENT PERFORMANCE PREDICTION IN AN ONLINE SYSTEM USING FEATURE ENGINEERING
Abstract
There are growing concerns about the Fairness, Accountability, Transparency, and Ethics (FATE) of educational interventions supported by Artificial Intelligence (AI) algorithms. One emerging approach for increasing trust in AI systems is eXplainable AI (XAI), which promotes methods that produce transparent explanations and reasons for the decisions AI systems make. Researchers from different disciplines work together to define, design, and evaluate explainable systems; however, scholars in different fields pursue different objectives and largely independent strands of XAI research, which poses challenges for identifying appropriate design and evaluation methodologies and for consolidating knowledge across efforts. We extract a range of data-driven features from students' programming submissions and employ a stacked ensemble model to predict students' final exam grades. We use SHAP, a game-theory-based framework, to explain the model's predictions and help stakeholders understand the impact of different programming behaviors on student success. Research projects and activities, including standardization efforts toward developing XAI for smart cities, are outlined in detail. The lessons learned from state-of-the-art research are summarized, and various technical challenges are discussed to shed new light on future research directions. The paper concludes by discussing opportunities, challenges, and future research needs for the effective incorporation of XAI in education. Further, we provide summarized, ready-to-use tables of evaluation methods and recommendations for different goals in XAI research.
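To make the pipeline described above concrete, the following is a minimal sketch of a stacked ensemble grade predictor explained with SHAP, built with scikit-learn and the shap library. The feature names (e.g., num_submissions, compile_error_rate) and the synthetic data are hypothetical placeholders for illustration, not the study's actual features or dataset.

```python
# A minimal sketch (not the authors' implementation) of the pipeline the
# abstract describes: engineered features from programming submissions,
# a stacked ensemble regressor, and SHAP explanations of its predictions.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import RandomForestRegressor, StackingRegressor
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

# Hypothetical per-student features extracted from programming submissions.
rng = np.random.default_rng(0)
X = pd.DataFrame({
    "num_submissions": rng.integers(1, 60, 200),
    "avg_attempts_per_problem": rng.uniform(1, 8, 200),
    "compile_error_rate": rng.uniform(0, 1, 200),
    "time_to_first_submission_hrs": rng.uniform(0, 72, 200),
})
# Synthetic final-exam grade, for illustration only.
y = (70
     - 20 * X["compile_error_rate"]
     + 0.2 * X["num_submissions"]
     + rng.normal(0, 5, 200))

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Stacked ensemble: base learners feed a ridge meta-learner.
model = StackingRegressor(
    estimators=[
        ("rf", RandomForestRegressor(n_estimators=200, random_state=0)),
        ("ridge", Ridge()),
    ],
    final_estimator=Ridge(),
)
model.fit(X_train, y_train)
print("R^2 on held-out students:", model.score(X_test, y_test))

# SHAP: game-theoretic attribution of each feature's contribution to
# every individual grade prediction.
explainer = shap.Explainer(model.predict, X_train)
shap_values = explainer(X_test)
shap.plots.beeswarm(shap_values)  # global view of behavior -> grade impact
```

Treating the ensemble as a black-box callable (model.predict) lets SHAP's model-agnostic explainer attribute the stacked model as a whole, so stakeholders see one consistent per-feature contribution per student rather than separate explanations for each base learner.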