A counterfactual-based approach to prevent crowding in intelligent subway systems

23 Sept 2023 (modified: 11 Feb 2024). Submitted to ICLR 2024.
Primary Area: visualization or interpretation of learned representations
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Keywords: Explainable AI, counterfactual explanation, crowding prediction, smart public transportation, data-driven modelling.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
TL;DR: Counterfactual explanations for crowd prevention in subways.
Abstract: Today, the cities we live in are far from being truly smart: overcrowding, pollution, and poor transportation management are still in the headlines. With wide-scale deployment of advanced Artificial Intelligence (AI) solutions, however, it is possible to reverse this course and apply appropriate countermeasures, taking a step forward on the road to sustainability. In this research, explainable AI techniques are applied to provide public transportation experts with suggestions on how to control crowding on subway platforms, leveraging interpretable, rule-based models enhanced with counterfactual explanations. The experimental scenario relies on agent-based simulations of the De Ferrari Hitachi subway station in Genoa, Italy. Numerical results for both crowding prediction and counterfactual (i.e., countermeasure) properties are encouraging. Moreover, the quality of the proposed explainable methodology was assessed by a team of domain experts to validate the model.
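The core idea of the abstract, counterfactual explanations over an interpretable rule-based crowding model, can be illustrated with a minimal sketch. The rules, feature names, thresholds, and costs below are illustrative assumptions for a toy example, not the paper's actual model or data:

```python
# Hypothetical sketch: find the cheapest change to an operational state that
# flips a rule-based crowding prediction from "crowded" to "not crowded".
# All rules, features, and costs here are assumed for illustration only.
from itertools import product

def predicts_crowded(features):
    """Toy rule-based model: the platform is 'crowded' when passenger
    inflow is high AND the time between trains (headway) is long."""
    return features["inflow_per_min"] > 50 and features["headway_min"] > 4

def counterfactual(features, steps=20):
    """Brute-force the lowest-cost perturbation of the two actionable
    features that makes the model predict 'not crowded'."""
    best, best_cost = None, float("inf")
    for d_inflow, d_headway in product(range(steps + 1), repeat=2):
        candidate = dict(features)
        candidate["inflow_per_min"] -= d_inflow        # e.g. meter entry gates
        candidate["headway_min"] -= d_headway * 0.25   # e.g. add train runs
        cost = d_inflow + d_headway * 0.25             # assumed action cost
        if not predicts_crowded(candidate) and cost < best_cost:
            best, best_cost = candidate, cost
    return best

state = {"inflow_per_min": 60, "headway_min": 5}
cf = counterfactual(state)
# In this toy setting the cheapest countermeasure is reducing the headway
# to 4 minutes, which a transport expert can read directly as an action.
```

The counterfactual is itself the countermeasure: the minimal, actionable change (here, shortening the headway) that the model predicts would prevent crowding.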
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 6983