Rule Extraction in Trained Feedforward Deep Neural Networks - Integrating Cosine Similarity and Logic for Explainability

Authors

Pablo Negro, Claudia Pons

DOI: https://doi.org/10.59471/raia2024203

Abstract

Explainability is a fundamental aspect of machine learning, particularly for ensuring transparency and trust in decision-making processes.

As machine learning models grow more complex, the integration of neural and symbolic approaches has emerged as a promising answer to the explainability problem. In this context, search methods for extracting rules from trained deep neural networks have proven effective.
This involves examining the weight and bias values learned by the network, typically by calculating the correlation between weight vectors and outputs. The hypothesis developed in this article is that incorporating cosine similarity into this process allows the search space to be efficiently narrowed down to the critical path connecting inputs to outputs.
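A minimal sketch of this idea in Python, assuming fully connected layers whose weights are given as (n_in, n_out) matrices; the indicator-vector scoring and the top_k pruning parameter are illustrative assumptions, not the article's actual procedure:

```python
import numpy as np

def cosine(u, v, eps=1e-12):
    """Cosine similarity between two vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + eps))

def critical_path(weights, output_idx, top_k=2):
    """Trace back from one output neuron, layer by layer.

    weights[l] has shape (n_in, n_out). Each candidate neuron in the
    earlier layer is scored by the cosine similarity between its outgoing
    weight row and an indicator vector marking the neurons already kept
    in the layer above; only the top_k scorers stay in the search space.
    """
    selected = [output_idx]
    path = [list(selected)]
    for W in reversed(weights):
        mask = np.zeros(W.shape[1])
        mask[selected] = 1.0  # direction defined by the neurons kept so far
        scores = np.array([cosine(W[i, :], mask) for i in range(W.shape[0])])
        selected = list(np.argsort(scores)[-top_k:][::-1])
        path.append(selected)
    return path[::-1]  # input layer first

# Toy 2-3-2 network with random stand-in "trained" weights.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(2, 3))  # input -> hidden
W2 = rng.normal(size=(3, 2))  # hidden -> output
print(critical_path([W1, W2], output_idx=0, top_k=1))
```

Keeping only top_k candidates per layer replaces the exponential space of input-to-output paths with a linear-size beam, which is the kind of reduction the hypothesis refers to.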
Furthermore, to give a more comprehensive and interpretable account of the decision-making process, this article proposes integrating first-order logic (FOL) into the rule extraction process. By combining cosine similarity with FOL, a novel algorithm capable of extracting and explaining the rule patterns learned by a trained feedforward neural network was designed and implemented. The algorithm was tested on three use cases and proved effective at providing insight into the model's behavior.
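The FOL side can be sketched in the same spirit: once a critical path has been isolated, it can be rendered as an implication over the network's neurons. The predicates HighActivation and Predicts below are hypothetical stand-ins, not the article's actual vocabulary:

```python
def path_to_fol_rule(path, class_name):
    """Render a critical path (input layer first) as a FOL-style rule:
    if every neuron on the path fires strongly, the class is predicted."""
    antecedents = [
        f"HighActivation(x, L{layer}, N{n})"
        for layer, neurons in enumerate(path[:-1])  # layers feeding the output
        for n in neurons
    ]
    return f"∀x ({' ∧ '.join(antecedents)} → Predicts(x, {class_name}))"

# Using a toy path such as the one produced by the sketch above:
print(path_to_fol_rule([[1], [2], [0]], "ClassA"))
# ∀x (HighActivation(x, L0, N1) ∧ HighActivation(x, L1, N2) → Predicts(x, ClassA))
```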

References

Aghaeipoor, F., Sabokrou, M., & Fernández, A. (2023). Fuzzy Rule-Based Explainer Systems for Deep Neural Networks: From Local Explainability to Global Understanding. IEEE Transactions on Fuzzy Systems, 1–12. https://doi.org/10.1109/TFUZZ.2023.3243935

Alpaydin, E. (2020). Introduction to Machine Learning. MIT Press.

Amarasinghe, K., Kenney, K., & Manic, M. (2018). Toward Explainable Deep Neural Network Based Anomaly Detection. 2018 11th International Conference on Human System Interaction (HSI), 311–317. https://doi.org/10.1109/HSI.2018.8430788

AmirHosseini, B., & Hosseini, R. (2019). An improved fuzzy-differential evolution approach applied to classification of tumors in liver CT scan images. Medical & Biological Engineering & Computing, 57(10), Article 10. https://doi.org/10.1007/s11517-019-02009-7

Angelov, P., & Soares, E. (2019). Towards Explainable Deep Neural Networks (xDNN) (arXiv:1912.02523). arXiv. http://arxiv.org/abs/1912.02523

Barbiero, P., Ciravegna, G., Giannini, F., Lió, P., Gori, M., & Melacci, S. (2022). Entropy-Based Logic Explanations of Neural Networks. Proceedings of the AAAI Conference on Artificial Intelligence, 36(6), 6046–6054. https://doi.org/10.1609/aaai.v36i6.20551

Battaglia, P. W., Hamrick, J. B., Bapst, V., Sanchez-Gonzalez, A., Zambaldi, V., Malinowski, M., Tacchetti, A., Raposo, D., Santoro, A., Faulkner, R., Gulcehre, C., Song, F., Ballard, A., Gilmer, J., Dahl, G., Vaswani, A., Allen, K., Nash, C., Langston, V., … Pascanu, R. (2018). Relational inductive biases, deep learning, and graph networks (arXiv:1806.01261). arXiv. http://arxiv.org/abs/1806.01261

Burkhardt, S., Brugger, J., Wagner, N., Ahmadi, Z., Kersting, K., & Kramer, S. (2021). Rule Extraction From Binary Neural Networks With Convolutional Rules for Model Validation. Frontiers in Artificial Intelligence, 4, 642263. https://doi.org/10.3389/frai.2021.642263

Ciravegna, G., Barbiero, P., Giannini, F., Gori, M., Lió, P., Maggini, M., & Melacci, S. (2023). Logic Explained Networks. Artificial Intelligence, 314, 103822. https://doi.org/10.1016/j.artint.2022.103822

Cocarascu, O., Cyras, K., & Toni, F. (2018). Explanatory predictions with artificial neural networks and argumentation.

Csiszár, O., Csiszár, G., & Dombi, J. (2020). How to implement MCDM tools and continuous logic into neural computation?: Towards better interpretability of neural networks. Knowledge-Based Systems, 210, 106530. https://doi.org/10.1016/j.knosys.2020.106530

Dai, W.-Z., & Muggleton, S. H. (2021). Abductive Knowledge Induction From Raw Data (arXiv:2010.03514). arXiv. http://arxiv.org/abs/2010.03514

De, T., Giri, P., Mevawala, A., Nemani, R., & Deo, A. (2020). Explainable AI: A Hybrid Approach to Generate Human-Interpretable Explanation for Deep Learning Prediction. Procedia Computer Science, 168, 40–48. https://doi.org/10.1016/j.procs.2020.02.255

Dombi, J., & Csiszár, O. (2021). Interpretable Neural Networks Based on Continuous-Valued Logic and Multi-criteria Decision Operators. In J. Dombi & O. Csiszár (Eds.), Explainable Neural Networks Based on Fuzzy Logic and Multi-criteria Decision Tools (pp. 147–169). Springer International Publishing. https://doi.org/10.1007/978-3-030-72280-7_9

Domingos, P. (2018). The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World. Basic Books.

Garcez, A. d’Avila, Gori, M., Lamb, L. C., Serafini, L., Spranger, M., & Tran, S. N. (2019). Neural-Symbolic Computing: An Effective Methodology for Principled Integration of Machine Learning and Reasoning (arXiv:1905.06088). arXiv. http://arxiv.org/abs/1905.06088

Garnelo, M., Arulkumaran, K., & Shanahan, M. (2016). Towards Deep Symbolic Reinforcement Learning (arXiv:1609.05518). arXiv. http://arxiv.org/abs/1609.05518

Giunchiglia, E., Stoian, M. C., & Lukasiewicz, T. (2022). Deep Learning with Logical Constraints (arXiv:2205.00523). arXiv. http://arxiv.org/abs/2205.00523

Goodfellow, I., Bengio, Y., & Courville, A. (2016). Deep learning. The MIT Press.

Krishnan, R., Sivakumar, G., & Bhattacharya, P. (1999). Extracting decision trees from trained neural networks. Pattern Recognition, 32(12), 1999–2009.

Lake, B. M., Ullman, T. D., Tenenbaum, J. B., & Gershman, S. J. (2017). Building machines that learn and think like people. Behavioral and Brain Sciences. https://doi.org/10.1017/S0140525X16001837

Mahdavifar, S., & Ghorbani, A. A. (2020). DeNNeS: Deep embedded neural network expert system for detecting cyber attacks. Neural Computing and Applications, 32(18), Article 18. https://doi.org/10.1007/s00521-020-04830-w

Marcus, G. (2018). Deep Learning: A Critical Appraisal (arXiv:1801.00631). arXiv. http://arxiv.org/abs/1801.00631

Montavon, G., Lapuschkin, S., Binder, A., Samek, W., & Müller, K.-R. (2017). Explaining nonlinear classification decisions with deep Taylor decomposition. Pattern Recognition, 65, 211–222. https://doi.org/10.1016/j.patcog.2016.11.008

Negro, P. A., & Pons, C. (2023). Extracción de reglas en redes neuronales feedforward entrenadas con lógica de primer orden [Rule extraction in trained feedforward neural networks with first-order logic]. Memorias de las JAIIO, 9(2), Article 2.

Negro, P., & Pons, C. (2022). Artificial Intelligence techniques based on the integration of symbolic logic and deep neural networks: A systematic review of the literature. Inteligencia Artificial, 25(69), 13–41. https://doi.org/10.4114/intartif.vol25iss69pp13-41

Nielsen, I. E., Dera, D., Rasool, G., Bouaynaya, N., & Ramachandran, R. P. (2022). Robust Explainability: A Tutorial on Gradient-Based Attribution Methods for Deep Neural Networks. IEEE Signal Processing Magazine, 39(4), 73–84. https://doi.org/10.1109/MSP.2022.3142719

Pons, C., Rosenfeld, R., & Smith, C. P. (2017). Lógica para Informática [Logic for computer science]. Editorial de la Universidad Nacional de La Plata (EDULP). https://doi.org/10.35537/10915/61426

Rumelhart, D. E., Hinton, G. E., & Williams, R. J. (1986). Learning representations by back-propagating errors. Nature, 323(6088), 533–536. https://doi.org/10.1038/323533a0

Russell, S., & Norvig, P. (2010). Artificial Intelligence: A Modern Approach (3rd ed.). Pearson.

Samek, W., Montavon, G., Lapuschkin, S., Anders, C. J., & Muller, K.-R. (2021). Explaining Deep Neural Networks and Beyond: A Review of Methods and Applications. Proceedings of the IEEE, 109(3), 247–278. https://doi.org/10.1109/JPROC.2021.3060483

Samek, W., Wiegand, T., & Müller, K.-R. (2017). Explainable Artificial Intelligence: Understanding, Visualizing and Interpreting Deep Learning Models (arXiv:1708.08296). arXiv. http://arxiv.org/abs/1708.08296

Santos, R. T., Nievola, J. C., & Freitas, A. A. (2000). Extracting comprehensible rules from neural networks via genetic algorithms. 2000 IEEE Symposium on Combinations of Evolutionary Computation and Neural Networks. Proceedings of the First IEEE Symposium on Combinations of Evolutionary Computation and Neural Networks (Cat. No.00EX448), 130–139. https://doi.org/10.1109/ECNN.2000.886228

Schmid, U., & Finzel, B. (2020). Mutual Explanations for Cooperative Decision Making in Medicine. KI - Kunstliche Intelligenz, 34(2), Article 2. https://doi.org/10.1007/s13218-020-00633-2

Shahroudnejad, A. (2021). A Survey on Understanding, Visualizations, and Explanation of Deep Neural Networks (arXiv:2102.01792). arXiv. http://arxiv.org/abs/2102.01792

Tran, S. N. (2017). Unsupervised Neural-Symbolic Integration (arXiv:1706.01991). arXiv. http://arxiv.org/abs/1706.01991

Wang, W., & Pan, S. J. (2020). Integrating deep learning with logic fusion for information extraction. Proceedings of the AAAI Conference on Artificial Intelligence, 34(05), 9225–9232.

LeCun, Y., Bengio, Y., & Hinton, G. (2015). Deep learning. Nature, 521(7553), 436–444. https://doi.org/10.1038/nature14539

Zarlenga, M. E., Shams, Z., & Jamnik, M. (2021). Efficient Decompositional Rule Extraction for Deep Neural Networks (arXiv:2111.12628). arXiv. https://doi.org/10.48550/arXiv.2111.12628

Published

2024-12-30

How to Cite

1. Negro P, Pons C. Rule Extraction in Trained Feedforward Deep Neural Networks - Integrating Cosine Similarity and Logic for Explainability. Revista Abierta de Informática Aplicada [Internet]. 2024 Dec. 30 [cited 2025 Feb. 5];8(1):21-46. Available from: https://raia.revistasuai.ar/index.php/raia/article/view/203