Explainable AI: Bridging the Gap Between AI and Human Understanding

Authors

  • Dr. Usman Qamar
  • Dr. Kashif Bilal

Keywords

Explainable AI (XAI), Artificial Intelligence (AI), Machine Learning (ML), Interpretability, Transparency, Model Accountability, Black-box Models, User Trust, Ethical AI, Human-AI Interaction, Model Explainability, Interdisciplinary Research, Fairness in AI, Context-aware Explanations, Real-time Explainability

Abstract

Artificial Intelligence (AI) is advancing rapidly across nearly every area of our lives: medicine, finance, the automotive industry, entertainment, and beyond. Yet as AI systems grow more sophisticated, they reveal less about how their decisions are made, with far-reaching implications for trust, accountability, and explanation. Explainable AI (XAI) has emerged as a field to address these problems and to help humans understand, trust, and communicate with AI systems more effectively. In this paper, we provide a timely study of the role of and expectations for explainability in AI, in particular what is currently achievable and what problems remain open. We explore the intersection of AI and human interpretability, showing how explainable AI bridges technical capability and human comprehension, and we advocate for more ethical, accountable, and reliable AI.
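
To make the idea of post-hoc explainability concrete, the following is a minimal sketch of one widely used technique, SHAP feature attribution, applied to an otherwise opaque model. It illustrates what current tooling can do; it is not a method from this paper, and the dataset, model, and library choices (scikit-learn's diabetes data, a random forest, and the shap package) are assumptions made for the example.

    # Minimal post-hoc explainability sketch (illustrative only; not this
    # paper's method). Assumes scikit-learn and the shap package are installed.
    import shap
    from sklearn.datasets import load_diabetes
    from sklearn.ensemble import RandomForestRegressor

    # Train a model that is opaque ("black box") from the user's perspective.
    data = load_diabetes()
    model = RandomForestRegressor(n_estimators=100, random_state=0)
    model.fit(data.data, data.target)

    # SHAP attributes a single prediction to its input features, so the
    # output can be discussed in human terms ("bmi pushed the prediction up").
    explainer = shap.TreeExplainer(model)
    shap_values = explainer.shap_values(data.data[:1])  # shape: (1, n_features)
    for name, value in zip(data.feature_names, shap_values[0]):
        print(f"{name}: {value:+.3f}")

Per-feature attributions of this kind are one way an XAI layer turns a bare numeric prediction into an account that a clinician, loan officer, or regulator can question.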

Published

2024-09-27

How to Cite

Qamar, U., & Bilal, K. (2024). Explainable AI: Bridging the Gap Between AI and Human Understanding. AlgoVista: Journal of AI & Computer Science, 1(1). https://algovista.org/index.php/AVJCS/article/view/14
